I0821 05:54:07.778766 10 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0821 05:54:07.783225 10 e2e.go:124] Starting e2e run "7fe122bb-c636-4df0-97ba-7299c43827f7" on Ginkgo node 1 {"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1597989231 - Will randomize all specs Will run 275 of 4992 specs Aug 21 05:54:08.350: INFO: >>> kubeConfig: /root/.kube/config Aug 21 05:54:08.400: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Aug 21 05:54:08.601: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Aug 21 05:54:08.775: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Aug 21 05:54:08.775: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Aug 21 05:54:08.775: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Aug 21 05:54:08.818: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Aug 21 05:54:08.818: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Aug 21 05:54:08.818: INFO: e2e test version: v1.18.8 Aug 21 05:54:08.824: INFO: kube-apiserver version: v1.18.8 Aug 21 05:54:08.827: INFO: >>> kubeConfig: /root/.kube/config Aug 21 05:54:08.844: INFO: Cluster IP family: ipv4 SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:54:08.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api Aug 21 05:54:08.936: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 21 05:54:09.000: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de6f6512-dd2f-4e67-b588-595b706d011e" in namespace "downward-api-6483" to be "Succeeded or Failed" Aug 21 05:54:09.042: INFO: Pod "downwardapi-volume-de6f6512-dd2f-4e67-b588-595b706d011e": Phase="Pending", Reason="", readiness=false. Elapsed: 42.485308ms Aug 21 05:54:11.053: INFO: Pod "downwardapi-volume-de6f6512-dd2f-4e67-b588-595b706d011e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05348685s Aug 21 05:54:13.059: INFO: Pod "downwardapi-volume-de6f6512-dd2f-4e67-b588-595b706d011e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.058900489s Aug 21 05:54:15.067: INFO: Pod "downwardapi-volume-de6f6512-dd2f-4e67-b588-595b706d011e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066911891s STEP: Saw pod success Aug 21 05:54:15.067: INFO: Pod "downwardapi-volume-de6f6512-dd2f-4e67-b588-595b706d011e" satisfied condition "Succeeded or Failed" Aug 21 05:54:15.072: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-de6f6512-dd2f-4e67-b588-595b706d011e container client-container: STEP: delete the pod Aug 21 05:54:15.153: INFO: Waiting for pod downwardapi-volume-de6f6512-dd2f-4e67-b588-595b706d011e to disappear Aug 21 05:54:15.160: INFO: Pod downwardapi-volume-de6f6512-dd2f-4e67-b588-595b706d011e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:54:15.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6483" for this suite. • [SLOW TEST:6.335 seconds] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":3,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:54:15.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-1263 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-1263 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1263 Aug 21 05:54:15.350: INFO: Found 0 stateful pods, waiting for 1 Aug 21 05:54:25.363: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that 
stateful set scale up will not halt with unhealthy stateful pod Aug 21 05:54:25.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1263 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 05:54:31.015: INFO: stderr: "I0821 05:54:30.875431 35 log.go:172] (0x28a0070) (0x28a01c0) Create stream\nI0821 05:54:30.877452 35 log.go:172] (0x28a0070) (0x28a01c0) Stream added, broadcasting: 1\nI0821 05:54:30.887740 35 log.go:172] (0x28a0070) Reply frame received for 1\nI0821 05:54:30.888965 35 log.go:172] (0x28a0070) (0x2fc72d0) Create stream\nI0821 05:54:30.889103 35 log.go:172] (0x28a0070) (0x2fc72d0) Stream added, broadcasting: 3\nI0821 05:54:30.890955 35 log.go:172] (0x28a0070) Reply frame received for 3\nI0821 05:54:30.891169 35 log.go:172] (0x28a0070) (0x28a0a80) Create stream\nI0821 05:54:30.891225 35 log.go:172] (0x28a0070) (0x28a0a80) Stream added, broadcasting: 5\nI0821 05:54:30.892444 35 log.go:172] (0x28a0070) Reply frame received for 5\nI0821 05:54:30.961631 35 log.go:172] (0x28a0070) Data frame received for 5\nI0821 05:54:30.962002 35 log.go:172] (0x28a0a80) (5) Data frame handling\nI0821 05:54:30.962667 35 log.go:172] (0x28a0a80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 05:54:30.988926 35 log.go:172] (0x28a0070) Data frame received for 5\nI0821 05:54:30.989182 35 log.go:172] (0x28a0a80) (5) Data frame handling\nI0821 05:54:30.989385 35 log.go:172] (0x28a0070) Data frame received for 3\nI0821 05:54:30.989470 35 log.go:172] (0x2fc72d0) (3) Data frame handling\nI0821 05:54:30.989559 35 log.go:172] (0x2fc72d0) (3) Data frame sent\nI0821 05:54:30.989649 35 log.go:172] (0x28a0070) Data frame received for 3\nI0821 05:54:30.989710 35 log.go:172] (0x2fc72d0) (3) Data frame handling\nI0821 05:54:30.991162 35 log.go:172] (0x28a0070) Data frame received for 1\nI0821 05:54:30.991308 35 log.go:172] (0x28a01c0) (1) Data frame handling\nI0821 05:54:30.991461 35 log.go:172] (0x28a01c0) (1) Data frame sent\nI0821 05:54:30.992518 35 log.go:172] (0x28a0070) (0x28a01c0) Stream removed, broadcasting: 1\nI0821 05:54:30.994991 35 log.go:172] (0x28a0070) Go away received\nI0821 05:54:30.998021 35 log.go:172] (0x28a0070) (0x28a01c0) Stream removed, broadcasting: 1\nI0821 05:54:30.998258 35 log.go:172] (0x28a0070) (0x2fc72d0) Stream removed, broadcasting: 3\nI0821 05:54:30.998451 35 log.go:172] (0x28a0070) (0x28a0a80) Stream removed, broadcasting: 5\n" Aug 21 05:54:31.016: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 05:54:31.017: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 05:54:31.024: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 21 05:54:41.033: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 21 05:54:41.033: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 05:54:41.081: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 05:54:41.082: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:31 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:15 +0000 UTC }] Aug 21 05:54:41.083: INFO: ss-1 Pending [] Aug 21 05:54:41.083: INFO: Aug 21 05:54:41.083: INFO: StatefulSet ss has not reached scale 3, at 2 Aug 21 05:54:42.092: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.974228051s Aug 21 05:54:43.449: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.965889994s Aug 21 05:54:44.461: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.608366776s Aug 21 05:54:45.469: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.596857587s Aug 21 05:54:46.478: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.588778106s Aug 21 05:54:47.487: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.579511874s Aug 21 05:54:48.497: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.570461497s Aug 21 05:54:49.506: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.560815346s Aug 21 05:54:50.515: INFO: Verifying statefulset ss doesn't scale past 3 for another 551.673816ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1263 Aug 21 05:54:51.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1263 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 05:54:52.920: INFO: stderr: "I0821 05:54:52.795197 63 log.go:172] (0x2f48070) (0x2f48150) Create stream\nI0821 05:54:52.798493 63 log.go:172] (0x2f48070) (0x2f48150) Stream added, broadcasting: 1\nI0821 05:54:52.814910 63 log.go:172] (0x2f48070) Reply frame received for 1\nI0821 05:54:52.815914 63 log.go:172] (0x2f48070) (0x29ee5b0) Create stream\nI0821 05:54:52.816040 63 log.go:172] (0x2f48070) (0x29ee5b0) Stream added, broadcasting: 3\nI0821 05:54:52.818388 63 log.go:172] (0x2f48070) Reply frame received for 3\nI0821 05:54:52.818903 63 log.go:172] (0x2f48070) (0x2f48310) Create stream\nI0821 05:54:52.819018 63 log.go:172] (0x2f48070) (0x2f48310) Stream added, broadcasting: 5\nI0821 05:54:52.821106 63 log.go:172] (0x2f48070) Reply frame received for 5\nI0821 05:54:52.898300 63 log.go:172] (0x2f48070) Data frame received for 3\nI0821 05:54:52.898745 63 log.go:172] (0x2f48070) Data frame received for 5\nI0821 05:54:52.898968 63 log.go:172] (0x2f48310) (5) Data frame handling\nI0821 05:54:52.899101 63 log.go:172] (0x29ee5b0) (3) Data frame handling\nI0821 05:54:52.899325 63 log.go:172] (0x2f48070) Data frame received for 1\nI0821 05:54:52.899463 63 log.go:172] (0x2f48150) (1) Data frame handling\nI0821 05:54:52.900432 63 log.go:172] (0x29ee5b0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 05:54:52.901123 63 log.go:172] (0x2f48150) (1) Data frame sent\nI0821 05:54:52.901374 63 log.go:172] (0x2f48310) (5) Data frame sent\nI0821 05:54:52.901571 63 log.go:172] (0x2f48070) Data frame received for 5\nI0821 05:54:52.901724 63 log.go:172] (0x2f48310) (5) Data frame handling\nI0821 05:54:52.902130 63 log.go:172] (0x2f48070) Data frame received for 3\nI0821 05:54:52.902351 63 log.go:172] (0x29ee5b0) (3) Data frame handling\nI0821 05:54:52.904041 63 log.go:172] (0x2f48070) (0x2f48150) Stream removed, broadcasting: 1\nI0821 05:54:52.906017 63 log.go:172] (0x2f48070) Go away received\nI0821 05:54:52.909294 63 log.go:172] (0x2f48070) (0x2f48150) Stream removed, broadcasting: 1\nI0821 05:54:52.909509 63 
log.go:172] (0x2f48070) (0x29ee5b0) Stream removed, broadcasting: 3\nI0821 05:54:52.909686 63 log.go:172] (0x2f48070) (0x2f48310) Stream removed, broadcasting: 5\n" Aug 21 05:54:52.922: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 05:54:52.922: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 05:54:52.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1263 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 05:54:54.311: INFO: stderr: "I0821 05:54:54.163860 85 log.go:172] (0x2ac5f10) (0x2ac5f80) Create stream\nI0821 05:54:54.167030 85 log.go:172] (0x2ac5f10) (0x2ac5f80) Stream added, broadcasting: 1\nI0821 05:54:54.179579 85 log.go:172] (0x2ac5f10) Reply frame received for 1\nI0821 05:54:54.180100 85 log.go:172] (0x2ac5f10) (0x2c460e0) Create stream\nI0821 05:54:54.180180 85 log.go:172] (0x2ac5f10) (0x2c460e0) Stream added, broadcasting: 3\nI0821 05:54:54.182576 85 log.go:172] (0x2ac5f10) Reply frame received for 3\nI0821 05:54:54.183162 85 log.go:172] (0x2ac5f10) (0x2cd0150) Create stream\nI0821 05:54:54.183336 85 log.go:172] (0x2ac5f10) (0x2cd0150) Stream added, broadcasting: 5\nI0821 05:54:54.185330 85 log.go:172] (0x2ac5f10) Reply frame received for 5\nI0821 05:54:54.293894 85 log.go:172] (0x2ac5f10) Data frame received for 3\nI0821 05:54:54.294195 85 log.go:172] (0x2ac5f10) Data frame received for 5\nI0821 05:54:54.294334 85 log.go:172] (0x2cd0150) (5) Data frame handling\nI0821 05:54:54.294440 85 log.go:172] (0x2c460e0) (3) Data frame handling\nI0821 05:54:54.294831 85 log.go:172] (0x2c460e0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0821 05:54:54.295239 85 log.go:172] (0x2cd0150) (5) Data frame sent\nI0821 05:54:54.295613 85 log.go:172] (0x2ac5f10) Data frame received for 3\nI0821 05:54:54.295715 85 log.go:172] (0x2c460e0) (3) Data frame handling\nI0821 05:54:54.295815 85 log.go:172] (0x2ac5f10) Data frame received for 5\nI0821 05:54:54.295948 85 log.go:172] (0x2cd0150) (5) Data frame handling\nI0821 05:54:54.296064 85 log.go:172] (0x2ac5f10) Data frame received for 1\nI0821 05:54:54.296160 85 log.go:172] (0x2ac5f80) (1) Data frame handling\nI0821 05:54:54.296233 85 log.go:172] (0x2ac5f80) (1) Data frame sent\nI0821 05:54:54.297154 85 log.go:172] (0x2ac5f10) (0x2ac5f80) Stream removed, broadcasting: 1\nI0821 05:54:54.298751 85 log.go:172] (0x2ac5f10) Go away received\nI0821 05:54:54.301100 85 log.go:172] (0x2ac5f10) (0x2ac5f80) Stream removed, broadcasting: 1\nI0821 05:54:54.301341 85 log.go:172] (0x2ac5f10) (0x2c460e0) Stream removed, broadcasting: 3\nI0821 05:54:54.301550 85 log.go:172] (0x2ac5f10) (0x2cd0150) Stream removed, broadcasting: 5\n" Aug 21 05:54:54.311: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 05:54:54.312: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 05:54:54.312: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1263 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 05:54:55.666: INFO: stderr: "I0821 05:54:55.548966 107 log.go:172] 
(0x2dba000) (0x2dba070) Create stream\nI0821 05:54:55.551420 107 log.go:172] (0x2dba000) (0x2dba070) Stream added, broadcasting: 1\nI0821 05:54:55.568486 107 log.go:172] (0x2dba000) Reply frame received for 1\nI0821 05:54:55.569148 107 log.go:172] (0x2dba000) (0x2c3c7e0) Create stream\nI0821 05:54:55.569239 107 log.go:172] (0x2dba000) (0x2c3c7e0) Stream added, broadcasting: 3\nI0821 05:54:55.570561 107 log.go:172] (0x2dba000) Reply frame received for 3\nI0821 05:54:55.570774 107 log.go:172] (0x2dba000) (0x2a92070) Create stream\nI0821 05:54:55.570842 107 log.go:172] (0x2dba000) (0x2a92070) Stream added, broadcasting: 5\nI0821 05:54:55.571934 107 log.go:172] (0x2dba000) Reply frame received for 5\nI0821 05:54:55.644807 107 log.go:172] (0x2dba000) Data frame received for 3\nI0821 05:54:55.645176 107 log.go:172] (0x2dba000) Data frame received for 5\nI0821 05:54:55.645326 107 log.go:172] (0x2a92070) (5) Data frame handling\nI0821 05:54:55.645440 107 log.go:172] (0x2dba000) Data frame received for 1\nI0821 05:54:55.645568 107 log.go:172] (0x2dba070) (1) Data frame handling\nI0821 05:54:55.645775 107 log.go:172] (0x2c3c7e0) (3) Data frame handling\nI0821 05:54:55.646796 107 log.go:172] (0x2a92070) (5) Data frame sent\nI0821 05:54:55.646915 107 log.go:172] (0x2c3c7e0) (3) Data frame sent\nI0821 05:54:55.647197 107 log.go:172] (0x2dba000) Data frame received for 3\nI0821 05:54:55.647326 107 log.go:172] (0x2c3c7e0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0821 05:54:55.647768 107 log.go:172] (0x2dba070) (1) Data frame sent\nI0821 05:54:55.648208 107 log.go:172] (0x2dba000) Data frame received for 5\nI0821 05:54:55.648405 107 log.go:172] (0x2a92070) (5) Data frame handling\nI0821 05:54:55.651509 107 log.go:172] (0x2dba000) (0x2dba070) Stream removed, broadcasting: 1\nI0821 05:54:55.652296 107 log.go:172] (0x2dba000) Go away received\nI0821 05:54:55.656678 107 log.go:172] (0x2dba000) (0x2dba070) Stream removed, broadcasting: 1\nI0821 05:54:55.657181 107 log.go:172] (0x2dba000) (0x2c3c7e0) Stream removed, broadcasting: 3\nI0821 05:54:55.657770 107 log.go:172] (0x2dba000) (0x2a92070) Stream removed, broadcasting: 5\n" Aug 21 05:54:55.667: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 05:54:55.667: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 05:54:55.687: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 05:54:55.688: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 05:54:55.688: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 21 05:54:55.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1263 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 05:54:57.056: INFO: stderr: "I0821 05:54:56.951981 132 log.go:172] (0x2f66070) (0x2f660e0) Create stream\nI0821 05:54:56.955158 132 log.go:172] (0x2f66070) (0x2f660e0) Stream added, broadcasting: 1\nI0821 05:54:56.968506 132 log.go:172] (0x2f66070) Reply frame received for 1\nI0821 05:54:56.969852 132 log.go:172] (0x2f66070) (0x2f66310) Create stream\nI0821 05:54:56.970004 132 log.go:172] (0x2f66070) 
(0x2f66310) Stream added, broadcasting: 3\nI0821 05:54:56.972332 132 log.go:172] (0x2f66070) Reply frame received for 3\nI0821 05:54:56.972661 132 log.go:172] (0x2f66070) (0x2f664d0) Create stream\nI0821 05:54:56.972837 132 log.go:172] (0x2f66070) (0x2f664d0) Stream added, broadcasting: 5\nI0821 05:54:56.974356 132 log.go:172] (0x2f66070) Reply frame received for 5\nI0821 05:54:57.034516 132 log.go:172] (0x2f66070) Data frame received for 5\nI0821 05:54:57.034762 132 log.go:172] (0x2f664d0) (5) Data frame handling\nI0821 05:54:57.034908 132 log.go:172] (0x2f66070) Data frame received for 1\nI0821 05:54:57.035079 132 log.go:172] (0x2f660e0) (1) Data frame handling\nI0821 05:54:57.035296 132 log.go:172] (0x2f66070) Data frame received for 3\nI0821 05:54:57.035450 132 log.go:172] (0x2f66310) (3) Data frame handling\nI0821 05:54:57.035540 132 log.go:172] (0x2f660e0) (1) Data frame sent\nI0821 05:54:57.035885 132 log.go:172] (0x2f66310) (3) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 05:54:57.036142 132 log.go:172] (0x2f664d0) (5) Data frame sent\nI0821 05:54:57.036231 132 log.go:172] (0x2f66070) Data frame received for 3\nI0821 05:54:57.036309 132 log.go:172] (0x2f66310) (3) Data frame handling\nI0821 05:54:57.036466 132 log.go:172] (0x2f66070) Data frame received for 5\nI0821 05:54:57.036594 132 log.go:172] (0x2f664d0) (5) Data frame handling\nI0821 05:54:57.037328 132 log.go:172] (0x2f66070) (0x2f660e0) Stream removed, broadcasting: 1\nI0821 05:54:57.040043 132 log.go:172] (0x2f66070) Go away received\nI0821 05:54:57.042582 132 log.go:172] (0x2f66070) (0x2f660e0) Stream removed, broadcasting: 1\nI0821 05:54:57.042770 132 log.go:172] (0x2f66070) (0x2f66310) Stream removed, broadcasting: 3\nI0821 05:54:57.042911 132 log.go:172] (0x2f66070) (0x2f664d0) Stream removed, broadcasting: 5\n" Aug 21 05:54:57.057: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 05:54:57.057: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 05:54:57.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1263 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 05:54:58.499: INFO: stderr: "I0821 05:54:58.313684 155 log.go:172] (0x28a98f0) (0x28a9960) Create stream\nI0821 05:54:58.316871 155 log.go:172] (0x28a98f0) (0x28a9960) Stream added, broadcasting: 1\nI0821 05:54:58.333153 155 log.go:172] (0x28a98f0) Reply frame received for 1\nI0821 05:54:58.333952 155 log.go:172] (0x28a98f0) (0x28a82a0) Create stream\nI0821 05:54:58.334058 155 log.go:172] (0x28a98f0) (0x28a82a0) Stream added, broadcasting: 3\nI0821 05:54:58.336001 155 log.go:172] (0x28a98f0) Reply frame received for 3\nI0821 05:54:58.336270 155 log.go:172] (0x28a98f0) (0x2cd41c0) Create stream\nI0821 05:54:58.336343 155 log.go:172] (0x28a98f0) (0x2cd41c0) Stream added, broadcasting: 5\nI0821 05:54:58.337543 155 log.go:172] (0x28a98f0) Reply frame received for 5\nI0821 05:54:58.420890 155 log.go:172] (0x28a98f0) Data frame received for 5\nI0821 05:54:58.421126 155 log.go:172] (0x2cd41c0) (5) Data frame handling\nI0821 05:54:58.421508 155 log.go:172] (0x2cd41c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 05:54:58.478483 155 log.go:172] (0x28a98f0) Data frame received for 3\nI0821 05:54:58.478798 155 log.go:172] (0x28a82a0) (3) Data frame 
handling\nI0821 05:54:58.479001 155 log.go:172] (0x28a98f0) Data frame received for 5\nI0821 05:54:58.479485 155 log.go:172] (0x2cd41c0) (5) Data frame handling\nI0821 05:54:58.479795 155 log.go:172] (0x28a82a0) (3) Data frame sent\nI0821 05:54:58.480037 155 log.go:172] (0x28a98f0) Data frame received for 3\nI0821 05:54:58.480190 155 log.go:172] (0x28a82a0) (3) Data frame handling\nI0821 05:54:58.480401 155 log.go:172] (0x28a98f0) Data frame received for 1\nI0821 05:54:58.480624 155 log.go:172] (0x28a9960) (1) Data frame handling\nI0821 05:54:58.480890 155 log.go:172] (0x28a9960) (1) Data frame sent\nI0821 05:54:58.482973 155 log.go:172] (0x28a98f0) (0x28a9960) Stream removed, broadcasting: 1\nI0821 05:54:58.485588 155 log.go:172] (0x28a98f0) Go away received\nI0821 05:54:58.488906 155 log.go:172] (0x28a98f0) (0x28a9960) Stream removed, broadcasting: 1\nI0821 05:54:58.489206 155 log.go:172] (0x28a98f0) (0x28a82a0) Stream removed, broadcasting: 3\nI0821 05:54:58.489445 155 log.go:172] (0x28a98f0) (0x2cd41c0) Stream removed, broadcasting: 5\n" Aug 21 05:54:58.500: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 05:54:58.500: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 05:54:58.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1263 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 05:54:59.899: INFO: stderr: "I0821 05:54:59.735382 177 log.go:172] (0x30af500) (0x30af570) Create stream\nI0821 05:54:59.740976 177 log.go:172] (0x30af500) (0x30af570) Stream added, broadcasting: 1\nI0821 05:54:59.757048 177 log.go:172] (0x30af500) Reply frame received for 1\nI0821 05:54:59.757573 177 log.go:172] (0x30af500) (0x2f66070) Create stream\nI0821 05:54:59.757650 177 log.go:172] (0x30af500) (0x2f66070) Stream added, broadcasting: 3\nI0821 05:54:59.759004 177 log.go:172] (0x30af500) Reply frame received for 3\nI0821 05:54:59.759282 177 log.go:172] (0x30af500) (0x30ae070) Create stream\nI0821 05:54:59.759363 177 log.go:172] (0x30af500) (0x30ae070) Stream added, broadcasting: 5\nI0821 05:54:59.760508 177 log.go:172] (0x30af500) Reply frame received for 5\nI0821 05:54:59.836233 177 log.go:172] (0x30af500) Data frame received for 5\nI0821 05:54:59.836422 177 log.go:172] (0x30ae070) (5) Data frame handling\nI0821 05:54:59.836810 177 log.go:172] (0x30ae070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 05:54:59.879858 177 log.go:172] (0x30af500) Data frame received for 3\nI0821 05:54:59.879999 177 log.go:172] (0x2f66070) (3) Data frame handling\nI0821 05:54:59.880170 177 log.go:172] (0x30af500) Data frame received for 5\nI0821 05:54:59.880420 177 log.go:172] (0x30ae070) (5) Data frame handling\nI0821 05:54:59.880685 177 log.go:172] (0x2f66070) (3) Data frame sent\nI0821 05:54:59.881126 177 log.go:172] (0x30af500) Data frame received for 3\nI0821 05:54:59.881293 177 log.go:172] (0x2f66070) (3) Data frame handling\nI0821 05:54:59.882251 177 log.go:172] (0x30af500) Data frame received for 1\nI0821 05:54:59.882404 177 log.go:172] (0x30af570) (1) Data frame handling\nI0821 05:54:59.882580 177 log.go:172] (0x30af570) (1) Data frame sent\nI0821 05:54:59.884003 177 log.go:172] (0x30af500) (0x30af570) Stream removed, broadcasting: 1\nI0821 05:54:59.885423 177 log.go:172] (0x30af500) Go away received\nI0821 05:54:59.889000 177 
log.go:172] (0x30af500) (0x30af570) Stream removed, broadcasting: 1\nI0821 05:54:59.889677 177 log.go:172] (0x30af500) (0x2f66070) Stream removed, broadcasting: 3\nI0821 05:54:59.889865 177 log.go:172] (0x30af500) (0x30ae070) Stream removed, broadcasting: 5\n" Aug 21 05:54:59.900: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 05:54:59.900: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 05:54:59.900: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 05:54:59.911: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 21 05:55:09.925: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 21 05:55:09.925: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 21 05:55:09.926: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 21 05:55:09.963: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 05:55:09.963: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:15 +0000 UTC }] Aug 21 05:55:09.963: INFO: ss-1 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:09.964: INFO: ss-2 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:55:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:55:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:09.964: INFO: Aug 21 05:55:09.964: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 21 05:55:10.983: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 05:55:10.983: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:15 +0000 UTC }] Aug 21 05:55:10.984: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:10.984: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:55:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:55:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:10.984: INFO: Aug 21 05:55:10.985: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 21 05:55:11.997: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 05:55:11.997: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:15 +0000 UTC }] Aug 21 05:55:11.998: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:11.998: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:55:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:55:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:11.999: INFO: Aug 21 05:55:11.999: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 21 05:55:13.009: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 05:55:13.009: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:15 +0000 UTC }] Aug 21 05:55:13.010: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 
05:55:13.010: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:55:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:55:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:13.010: INFO: Aug 21 05:55:13.010: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 21 05:55:14.018: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 05:55:14.018: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:14.018: INFO: Aug 21 05:55:14.018: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 21 05:55:15.027: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 05:55:15.027: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:15.028: INFO: Aug 21 05:55:15.028: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 21 05:55:16.036: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 05:55:16.036: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:16.036: INFO: Aug 21 05:55:16.036: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 21 05:55:17.043: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 05:55:17.044: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:17.044: INFO: Aug 21 05:55:17.044: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 21 05:55:18.053: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 05:55:18.053: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:18.054: INFO: Aug 21 05:55:18.054: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 21 05:55:19.061: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 05:55:19.061: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 05:54:41 +0000 UTC }] Aug 21 05:55:19.061: INFO: Aug 21 05:55:19.062: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1263 Aug 21 05:55:20.068: INFO: Scaling statefulset ss to 0 Aug 21 05:55:20.086: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Aug 21 05:55:20.091: INFO: Deleting all statefulset in ns statefulset-1263 Aug 21 05:55:20.097: INFO: Scaling statefulset ss to 0 Aug 21 05:55:20.112: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 05:55:20.115: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:55:20.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1263" for this suite. 
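Editor's note: the burst-scaling sequence above is easy to lose inside the exec stream dumps, so the sketch below restates what the test drives, using the same mv commands that appear verbatim in the log plus plain kubectl scale calls. The readiness mechanics are an assumption on my part, not stated in the log: the replicas run httpd with an HTTP readiness probe against index.html, and the StatefulSet is created with podManagementPolicy: Parallel, which is what lets scaling proceed while a pod is unready.

    # Break readiness on ss-0; the probe target disappears and the pod goes Running / Ready=false
    kubectl -n statefulset-1263 exec ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
    # Burst scale-up still creates ss-1 and ss-2 even though ss-0 is unready
    kubectl -n statefulset-1263 scale statefulset ss --replicas=3
    # Restore readiness (repeat for ss-1 and ss-2), then break it again before scaling down
    kubectl -n statefulset-1263 exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    # Scale to 0; pod deletion is not blocked by the replicas being unready
    kubectl -n statefulset-1263 scale statefulset ss --replicas=0
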
• [SLOW TEST:65.007 seconds] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":2,"skipped":25,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:55:20.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-c8a7b355-d0fc-4aaa-8627-f50f5aee0524 STEP: Creating a pod to test consume configMaps Aug 21 05:55:20.297: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ff759510-ee01-4000-80dd-0d956995a610" in namespace "projected-3636" to be "Succeeded or Failed" Aug 21 05:55:20.334: INFO: Pod "pod-projected-configmaps-ff759510-ee01-4000-80dd-0d956995a610": Phase="Pending", Reason="", readiness=false. Elapsed: 36.841768ms Aug 21 05:55:22.357: INFO: Pod "pod-projected-configmaps-ff759510-ee01-4000-80dd-0d956995a610": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059223612s Aug 21 05:55:24.363: INFO: Pod "pod-projected-configmaps-ff759510-ee01-4000-80dd-0d956995a610": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.065950672s STEP: Saw pod success Aug 21 05:55:24.364: INFO: Pod "pod-projected-configmaps-ff759510-ee01-4000-80dd-0d956995a610" satisfied condition "Succeeded or Failed" Aug 21 05:55:24.369: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-ff759510-ee01-4000-80dd-0d956995a610 container projected-configmap-volume-test: STEP: delete the pod Aug 21 05:55:24.424: INFO: Waiting for pod pod-projected-configmaps-ff759510-ee01-4000-80dd-0d956995a610 to disappear Aug 21 05:55:24.442: INFO: Pod pod-projected-configmaps-ff759510-ee01-4000-80dd-0d956995a610 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:55:24.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3636" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":28,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:55:24.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Aug 21 05:55:30.587: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4329 PodName:pod-sharedvolume-9bf2302a-5779-489c-8298-18bd89d384a8 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 05:55:30.588: INFO: >>> kubeConfig: /root/.kube/config I0821 05:55:30.710749 10 log.go:172] (0x8865490) (0x8865500) Create stream I0821 05:55:30.711540 10 log.go:172] (0x8865490) (0x8865500) Stream added, broadcasting: 1 I0821 05:55:30.732408 10 log.go:172] (0x8865490) Reply frame received for 1 I0821 05:55:30.733720 10 log.go:172] (0x8865490) (0x8865730) Create stream I0821 05:55:30.733871 10 log.go:172] (0x8865490) (0x8865730) Stream added, broadcasting: 3 I0821 05:55:30.736447 10 log.go:172] (0x8865490) Reply frame received for 3 I0821 05:55:30.736965 10 log.go:172] (0x8865490) (0x8968070) Create stream I0821 05:55:30.737078 10 log.go:172] (0x8865490) (0x8968070) Stream added, broadcasting: 5 I0821 05:55:30.739010 10 log.go:172] (0x8865490) Reply frame received for 5 I0821 05:55:30.825837 10 log.go:172] (0x8865490) Data frame received for 3 I0821 05:55:30.826280 10 log.go:172] (0x8865490) Data frame received for 5 I0821 05:55:30.826552 10 log.go:172] (0x8865730) (3) Data frame handling I0821 05:55:30.826820 10 log.go:172] (0x8865490) Data frame received for 1 
I0821 05:55:30.826946 10 log.go:172] (0x8865500) (1) Data frame handling I0821 05:55:30.827081 10 log.go:172] (0x8968070) (5) Data frame handling I0821 05:55:30.829347 10 log.go:172] (0x8865500) (1) Data frame sent I0821 05:55:30.829454 10 log.go:172] (0x8865730) (3) Data frame sent I0821 05:55:30.831217 10 log.go:172] (0x8865490) Data frame received for 3 I0821 05:55:30.831748 10 log.go:172] (0x8865490) (0x8865500) Stream removed, broadcasting: 1 I0821 05:55:30.832050 10 log.go:172] (0x8865730) (3) Data frame handling I0821 05:55:30.832660 10 log.go:172] (0x8865490) Go away received I0821 05:55:30.834489 10 log.go:172] (0x8865490) (0x8865500) Stream removed, broadcasting: 1 I0821 05:55:30.834741 10 log.go:172] (0x8865490) (0x8865730) Stream removed, broadcasting: 3 I0821 05:55:30.834925 10 log.go:172] (0x8865490) (0x8968070) Stream removed, broadcasting: 5 Aug 21 05:55:30.835: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:55:30.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4329" for this suite. • [SLOW TEST:6.391 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":4,"skipped":30,"failed":0} SS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:55:30.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:55:30.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-531" for this suite. 
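Editor's note: the Table-transformation spec above produces no visible output, so here is a rough illustration of the mechanism it exercises. The API server renders a resource list as a Table when the client asks for that representation in the Accept header; a backend that cannot produce Table metadata is expected to answer 406 Not Acceptable when the client offers no fallback media type, which is what this spec asserts against a stub backend. The proxy/curl combination below is my own demonstration of the request side, not what the framework runs.

    # In one shell: open an authenticated local proxy to the API server
    kubectl proxy --port=8001 &
    # Ask for the server-side Table rendering of pods in kube-system
    curl -s -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
      http://127.0.0.1:8001/api/v1/namespaces/kube-system/pods | head
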
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":5,"skipped":32,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:55:30.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Aug 21 05:55:31.034: INFO: >>> kubeConfig: /root/.kube/config Aug 21 05:55:40.950: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:56:38.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6671" for this suite. 
• [SLOW TEST:67.485 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":6,"skipped":35,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:56:38.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 21 05:56:38.527: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9965ab33-b5e4-4c3b-bfd6-e1e5f6d83b59" in namespace "downward-api-568" to be "Succeeded or Failed" Aug 21 05:56:38.606: INFO: Pod "downwardapi-volume-9965ab33-b5e4-4c3b-bfd6-e1e5f6d83b59": Phase="Pending", Reason="", readiness=false. Elapsed: 78.776552ms Aug 21 05:56:40.706: INFO: Pod "downwardapi-volume-9965ab33-b5e4-4c3b-bfd6-e1e5f6d83b59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178194791s Aug 21 05:56:42.714: INFO: Pod "downwardapi-volume-9965ab33-b5e4-4c3b-bfd6-e1e5f6d83b59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.186454372s STEP: Saw pod success Aug 21 05:56:42.714: INFO: Pod "downwardapi-volume-9965ab33-b5e4-4c3b-bfd6-e1e5f6d83b59" satisfied condition "Succeeded or Failed" Aug 21 05:56:42.719: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-9965ab33-b5e4-4c3b-bfd6-e1e5f6d83b59 container client-container: STEP: delete the pod Aug 21 05:56:42.758: INFO: Waiting for pod downwardapi-volume-9965ab33-b5e4-4c3b-bfd6-e1e5f6d83b59 to disappear Aug 21 05:56:42.770: INFO: Pod downwardapi-volume-9965ab33-b5e4-4c3b-bfd6-e1e5f6d83b59 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:56:42.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-568" for this suite. 
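Editor's note: both downward-API specs in this run (DefaultMode earlier, per-item mode here) boil down to the volume stanza sketched below. This is an illustrative manifest of mine, not the one the framework generates; the busybox image, the file path, and the 0400 mode are stand-ins chosen for the example.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client
        image: busybox
        command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          defaultMode: 0644        # applied to items that set no explicit mode
          items:
          - path: podname
            mode: 0400             # per-item override, the case exercised by this spec
            fieldRef:
              fieldPath: metadata.name
    EOF
    # Once the pod has succeeded, its log should print 400 for the per-item override
    kubectl logs downwardapi-mode-demo
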
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":58,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:56:42.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 21 05:56:52.266: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 21 05:56:54.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733586212, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733586212, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733586212, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733586212, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 05:56:57.353: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 05:56:57.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:56:58.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4356" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:15.878 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":8,"skipped":96,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:56:58.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 05:56:58.778: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-791' Aug 21 05:57:00.357: INFO: stderr: "" Aug 21 05:57:00.357: INFO: stdout: "replicationcontroller/agnhost-master created\n" Aug 21 05:57:00.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-791' Aug 21 05:57:01.964: INFO: stderr: "" Aug 21 05:57:01.964: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Aug 21 05:57:03.019: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 05:57:03.022: INFO: Found 0 / 1 Aug 21 05:57:03.988: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 05:57:03.988: INFO: Found 0 / 1 Aug 21 05:57:04.973: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 05:57:04.974: INFO: Found 1 / 1 Aug 21 05:57:04.975: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 21 05:57:04.986: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 05:57:04.986: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Aug 21 05:57:04.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config describe pod agnhost-master-vtkpl --namespace=kubectl-791' Aug 21 05:57:06.194: INFO: stderr: "" Aug 21 05:57:06.194: INFO: stdout: "Name: agnhost-master-vtkpl\nNamespace: kubectl-791\nPriority: 0\nNode: kali-worker2/172.18.0.13\nStart Time: Fri, 21 Aug 2020 05:57:00 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.85\nIPs:\n IP: 10.244.1.85\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://93a93f4271ef8a2bc275b108b70d41e6c2d468c496e9ff93b56b31615b7af132\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 21 Aug 2020 05:57:03 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-625lh (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-625lh:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-625lh\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-791/agnhost-master-vtkpl to kali-worker2\n Normal Pulled 5s kubelet, kali-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 3s kubelet, kali-worker2 Created container agnhost-master\n Normal Started 3s kubelet, kali-worker2 Started container agnhost-master\n" Aug 21 05:57:06.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-791' Aug 21 05:57:07.451: INFO: stderr: "" Aug 21 05:57:07.451: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-791\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-master-vtkpl\n" Aug 21 05:57:07.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-791' Aug 21 05:57:08.649: INFO: stderr: "" Aug 21 05:57:08.650: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-791\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.99.98.159\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.85:6379\nSession Affinity: None\nEvents: \n" Aug 21 05:57:08.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config describe node 
kali-control-plane' Aug 21 05:57:09.941: INFO: stderr: "" Aug 21 05:57:09.942: INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 15 Aug 2020 09:39:46 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: \n RenewTime: Fri, 21 Aug 2020 05:57:03 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 21 Aug 2020 05:55:12 +0000 Sat, 15 Aug 2020 09:39:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 21 Aug 2020 05:55:12 +0000 Sat, 15 Aug 2020 09:39:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 21 Aug 2020 05:55:12 +0000 Sat, 15 Aug 2020 09:39:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 21 Aug 2020 05:55:12 +0000 Sat, 15 Aug 2020 09:40:21 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.15\n Hostname: kali-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 04bdd55b92ef4b87b98c1323984fd428\n System UUID: 98a7b883-5496-49b8-a15e-cf216c9b1f46\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-rc.1-4-g43366250\n Kubelet Version: v1.18.8\n Kube-Proxy Version: v1.18.8\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-2567d 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d20h\n kube-system coredns-66bff467f8-k8c2r 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d20h\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d20h\n kube-system kindnet-gblkw 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 5d20h\n kube-system kube-apiserver-kali-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 5d20h\n kube-system kube-controller-manager-kali-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 5d20h\n kube-system kube-proxy-2d447 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d20h\n kube-system kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 5d20h\n local-path-storage local-path-provisioner-5b4b545c55-988r4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d20h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Aug 21 05:57:09.943: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config describe namespace kubectl-791' Aug 21 05:57:11.089: INFO: stderr: "" Aug 21 05:57:11.089: INFO: stdout: "Name: kubectl-791\nLabels: e2e-framework=kubectl\n e2e-run=7fe122bb-c636-4df0-97ba-7299c43827f7\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:57:11.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-791" for this suite. • [SLOW TEST:12.415 seconds] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":9,"skipped":178,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:57:11.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-dn98 STEP: Creating a pod to test atomic-volume-subpath Aug 21 05:57:11.221: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-dn98" in namespace "subpath-7289" to be "Succeeded or Failed" Aug 21 05:57:11.230: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.921615ms Aug 21 05:57:13.287: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065636495s Aug 21 05:57:15.295: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Running", Reason="", readiness=true. Elapsed: 4.073083798s Aug 21 05:57:17.303: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Running", Reason="", readiness=true. Elapsed: 6.081837158s Aug 21 05:57:19.310: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.088619731s Aug 21 05:57:21.318: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Running", Reason="", readiness=true. Elapsed: 10.096841124s Aug 21 05:57:23.326: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Running", Reason="", readiness=true. Elapsed: 12.104910137s Aug 21 05:57:25.334: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Running", Reason="", readiness=true. Elapsed: 14.112659792s Aug 21 05:57:27.342: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Running", Reason="", readiness=true. Elapsed: 16.120651969s Aug 21 05:57:29.364: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Running", Reason="", readiness=true. Elapsed: 18.142997604s Aug 21 05:57:31.372: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Running", Reason="", readiness=true. Elapsed: 20.150318111s Aug 21 05:57:33.380: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Running", Reason="", readiness=true. Elapsed: 22.158726881s Aug 21 05:57:35.407: INFO: Pod "pod-subpath-test-downwardapi-dn98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.185311098s STEP: Saw pod success Aug 21 05:57:35.408: INFO: Pod "pod-subpath-test-downwardapi-dn98" satisfied condition "Succeeded or Failed" Aug 21 05:57:35.414: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-dn98 container test-container-subpath-downwardapi-dn98: STEP: delete the pod Aug 21 05:57:35.631: INFO: Waiting for pod pod-subpath-test-downwardapi-dn98 to disappear Aug 21 05:57:35.837: INFO: Pod pod-subpath-test-downwardapi-dn98 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-dn98 Aug 21 05:57:35.838: INFO: Deleting pod "pod-subpath-test-downwardapi-dn98" in namespace "subpath-7289" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:57:35.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7289" for this suite. 
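The subpath run above mounts only part of an atomically-updated downward API volume into the container. The sketch below shows the shape of such a spec with client-go types, assuming hypothetical names, a busybox image, and a "downward" subdirectory inside the volume; it is not the pod the suite generates.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-downwardapi-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "cat /mnt/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/mnt",
					// Mount only the "downward" directory of the volume, so the
					// container sees /mnt/podname rather than the whole volume root.
					SubPath: "downward",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "downward/podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].VolumeMounts[0].SubPath)
}
```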
• [SLOW TEST:25.615 seconds] [sig-storage] Subpath /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":10,"skipped":197,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:57:36.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7183.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7183.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7183.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7183.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7183.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7183.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 05:57:45.933: INFO: DNS probes using dns-7183/dns-test-9b87f71d-130e-4e82-a873-f07afc660fb5 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:57:46.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7183" for this suite. • [SLOW TEST:10.177 seconds] [sig-network] DNS /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":11,"skipped":201,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:57:46.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 05:57:47.086: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 21 05:57:52.137: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 21 05:57:52.139: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 21 05:57:56.241: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7757 /apis/apps/v1/namespaces/deployment-7757/deployments/test-cleanup-deployment bb24079b-52af-47c8-b4a2-9867bb4b49d3 2009054 1 2020-08-21 05:57:52 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-08-21 05:57:52 +0000 UTC FieldsV1 
FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-21 05:57:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 
107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9402cd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-21 05:57:52 +0000 UTC,LastTransitionTime:2020-08-21 05:57:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-b4867b47f" has successfully progressed.,LastUpdateTime:2020-08-21 05:57:55 +0000 UTC,LastTransitionTime:2020-08-21 05:57:52 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 21 05:57:56.341: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f deployment-7757 /apis/apps/v1/namespaces/deployment-7757/replicasets/test-cleanup-deployment-b4867b47f d9bb7095-6b10-4286-a779-b163e4dd125a 2009043 1 2020-08-21 05:57:52 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment bb24079b-52af-47c8-b4a2-9867bb4b49d3 0x942f160 0x942f161}] [] [{kube-controller-manager Update apps/v1 2020-08-21 05:57:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 
109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 98 50 52 48 55 57 98 45 53 50 97 102 45 52 55 99 56 45 98 52 97 50 45 57 56 54 55 98 98 52 98 52 57 100 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 
34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x942f1d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 21 05:57:56.359: INFO: Pod "test-cleanup-deployment-b4867b47f-cgz49" is available: &Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-cgz49 test-cleanup-deployment-b4867b47f- deployment-7757 /api/v1/namespaces/deployment-7757/pods/test-cleanup-deployment-b4867b47f-cgz49 f66aec7c-1420-4b27-b9b9-c58189c4bdd9 2009042 0 2020-08-21 05:57:52 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f d9bb7095-6b10-4286-a779-b163e4dd125a 0x94030f0 0x94030f1}] [] [{kube-controller-manager Update v1 2020-08-21 05:57:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 57 98 98 55 48 57 53 45 54 98 49 48 45 52 50 56 54 45 97 55 55 57 45 98 49 54 51 101 52 100 100 49 50 53 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 
101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 05:57:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 56 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pzj5r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pzj5r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pzj5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 05:57:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 05:57:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 05:57:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 05:57:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.89,StartTime:2020-08-21 05:57:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 05:57:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://436b6b50d198de5ccc70f0df9f353a55aa6415fbd7d14271375dbbc0fecf420d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.89,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:57:56.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7757" for this suite. • [SLOW TEST:9.465 seconds] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":12,"skipped":214,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:57:56.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Aug 21 05:57:56.664: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2165 /api/v1/namespaces/watch-2165/configmaps/e2e-watch-test-resource-version 8af06fed-1443-40c1-93c1-3689859142d3 2009065 0 2020-08-21 05:57:56 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-21 05:57:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 
123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 21 05:57:56.668: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2165 /api/v1/namespaces/watch-2165/configmaps/e2e-watch-test-resource-version 8af06fed-1443-40c1-93c1-3689859142d3 2009066 0 2020-08-21 05:57:56 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-21 05:57:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:57:56.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2165" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":13,"skipped":224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:57:56.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 05:57:56.849: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Aug 21 05:58:15.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-528 create -f -' Aug 21 05:58:19.970: INFO: stderr: "" Aug 21 05:58:19.970: INFO: stdout: "e2e-test-crd-publish-openapi-6265-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 21 05:58:19.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-528 delete e2e-test-crd-publish-openapi-6265-crds test-foo' Aug 21 05:58:21.078: INFO: stderr: "" Aug 21 05:58:21.078: INFO: stdout: "e2e-test-crd-publish-openapi-6265-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Aug 21 05:58:21.079: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-528 apply -f -' Aug 21 05:58:22.589: INFO: stderr: "" Aug 21 05:58:22.590: INFO: stdout: "e2e-test-crd-publish-openapi-6265-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 21 05:58:22.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-528 delete e2e-test-crd-publish-openapi-6265-crds test-foo' Aug 21 05:58:23.739: INFO: stderr: "" Aug 21 05:58:23.739: INFO: stdout: "e2e-test-crd-publish-openapi-6265-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Aug 21 05:58:23.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-528 create -f -' Aug 21 05:58:25.186: INFO: rc: 1 Aug 21 05:58:25.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-528 apply -f -' Aug 21 05:58:26.634: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Aug 21 05:58:26.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-528 create -f -' Aug 21 05:58:28.001: INFO: rc: 1 Aug 21 05:58:28.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-528 apply -f -' Aug 21 05:58:29.380: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Aug 21 05:58:29.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6265-crds' Aug 21 05:58:30.832: INFO: stderr: "" Aug 21 05:58:30.832: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6265-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Aug 21 05:58:30.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6265-crds.metadata' Aug 21 05:58:32.297: INFO: stderr: "" Aug 21 05:58:32.298: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6265-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. 
If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Aug 21 05:58:32.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6265-crds.spec' Aug 21 05:58:33.776: INFO: stderr: "" Aug 21 05:58:33.776: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6265-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Aug 21 05:58:33.778: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6265-crds.spec.bars' Aug 21 05:58:35.186: INFO: stderr: "" Aug 21 05:58:35.186: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6265-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Aug 21 05:58:35.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6265-crds.spec.bars2' Aug 21 05:58:36.598: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:58:46.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-528" for this suite. 
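The `kubectl explain` output captured above documents the standard ObjectMeta fields (generateName, labels, annotations, finalizers, ownerReferences, plus the server-populated ones such as uid and resourceVersion). As a minimal sketch of how several of those fields fit together on a create request, the Go snippet below builds a ConfigMap with the k8s.io/api types and prints it as JSON; the name prefix, namespace, label/annotation keys, and finalizer string are illustrative and not values taken from this test run.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A ConfigMap whose metadata exercises several of the ObjectMeta fields
	// described by `kubectl explain` above. Server-populated fields such as
	// uid, resourceVersion, and creationTimestamp are left unset on create.
	cm := corev1.ConfigMap{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "ConfigMap"},
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "demo-", // the server appends a unique suffix
			Namespace:    "default",
			Labels:       map[string]string{"app": "demo"},
			Annotations:  map[string]string{"example.com/note": "set by an external tool"},
			Finalizers:   []string{"example.com/cleanup"}, // blocks deletion until removed
		},
		Data: map[string]string{"key": "value"},
	}
	out, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(out))
}
```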
• [SLOW TEST:49.519 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":14,"skipped":260,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:58:46.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 21 05:58:46.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bb71508-cc77-4a05-963e-b602c903726a" in namespace "downward-api-6536" to be "Succeeded or Failed" Aug 21 05:58:46.425: INFO: Pod "downwardapi-volume-4bb71508-cc77-4a05-963e-b602c903726a": Phase="Pending", Reason="", readiness=false. Elapsed: 51.928045ms Aug 21 05:58:48.433: INFO: Pod "downwardapi-volume-4bb71508-cc77-4a05-963e-b602c903726a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060225958s Aug 21 05:58:50.440: INFO: Pod "downwardapi-volume-4bb71508-cc77-4a05-963e-b602c903726a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067272017s STEP: Saw pod success Aug 21 05:58:50.440: INFO: Pod "downwardapi-volume-4bb71508-cc77-4a05-963e-b602c903726a" satisfied condition "Succeeded or Failed" Aug 21 05:58:50.446: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-4bb71508-cc77-4a05-963e-b602c903726a container client-container: STEP: delete the pod Aug 21 05:58:50.494: INFO: Waiting for pod downwardapi-volume-4bb71508-cc77-4a05-963e-b602c903726a to disappear Aug 21 05:58:50.507: INFO: Pod downwardapi-volume-4bb71508-cc77-4a05-963e-b602c903726a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:58:50.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6536" for this suite. 
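The downward API volume test above projects the container's CPU limit into a file and, because no limit is set on the container, expects the node's allocatable CPU to appear instead. The sketch below shows the general shape of such a pod, assuming k8s.io/api v0.18.x to match the cluster version in this log; the pod name, busybox image, mount path, and divisor are illustrative rather than the exact object the e2e framework creates.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0644)
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				// No resources.limits.cpu is set, so the projected value falls
				// back to the node's allocatable CPU.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"),
							},
							Mode: &mode,
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```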
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":263,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:58:50.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 21 05:58:50.622: INFO: Waiting up to 5m0s for pod "pod-4cbe5a33-f2ab-47ac-87c5-44ed4f07c527" in namespace "emptydir-8913" to be "Succeeded or Failed" Aug 21 05:58:50.638: INFO: Pod "pod-4cbe5a33-f2ab-47ac-87c5-44ed4f07c527": Phase="Pending", Reason="", readiness=false. Elapsed: 16.39515ms Aug 21 05:58:52.645: INFO: Pod "pod-4cbe5a33-f2ab-47ac-87c5-44ed4f07c527": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023429769s Aug 21 05:58:54.652: INFO: Pod "pod-4cbe5a33-f2ab-47ac-87c5-44ed4f07c527": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029871063s STEP: Saw pod success Aug 21 05:58:54.652: INFO: Pod "pod-4cbe5a33-f2ab-47ac-87c5-44ed4f07c527" satisfied condition "Succeeded or Failed" Aug 21 05:58:54.656: INFO: Trying to get logs from node kali-worker2 pod pod-4cbe5a33-f2ab-47ac-87c5-44ed4f07c527 container test-container: STEP: delete the pod Aug 21 05:58:54.703: INFO: Waiting for pod pod-4cbe5a33-f2ab-47ac-87c5-44ed4f07c527 to disappear Aug 21 05:58:54.722: INFO: Pod pod-4cbe5a33-f2ab-47ac-87c5-44ed4f07c527 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:58:54.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8913" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":266,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:58:54.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 21 05:58:54.909: INFO: Waiting up to 5m0s for pod "downward-api-0ec6d064-e584-4cd1-ac2e-5e89b8128dc7" in namespace "downward-api-2829" to be "Succeeded or Failed" Aug 21 05:58:54.955: INFO: Pod "downward-api-0ec6d064-e584-4cd1-ac2e-5e89b8128dc7": Phase="Pending", Reason="", readiness=false. Elapsed: 45.653009ms Aug 21 05:58:56.963: INFO: Pod "downward-api-0ec6d064-e584-4cd1-ac2e-5e89b8128dc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053507525s Aug 21 05:58:58.971: INFO: Pod "downward-api-0ec6d064-e584-4cd1-ac2e-5e89b8128dc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06142531s STEP: Saw pod success Aug 21 05:58:58.971: INFO: Pod "downward-api-0ec6d064-e584-4cd1-ac2e-5e89b8128dc7" satisfied condition "Succeeded or Failed" Aug 21 05:58:58.977: INFO: Trying to get logs from node kali-worker pod downward-api-0ec6d064-e584-4cd1-ac2e-5e89b8128dc7 container dapi-container: STEP: delete the pod Aug 21 05:58:59.017: INFO: Waiting for pod downward-api-0ec6d064-e584-4cd1-ac2e-5e89b8128dc7 to disappear Aug 21 05:58:59.047: INFO: Pod downward-api-0ec6d064-e584-4cd1-ac2e-5e89b8128dc7 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:58:59.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2829" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":270,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:58:59.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:59:15.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9720" for this suite. • [SLOW TEST:16.392 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":18,"skipped":287,"failed":0} SSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:59:15.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:59:34.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4716" for this suite. 
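The Job test above depends on restartPolicy OnFailure, so containers that exit non-zero are restarted in place by the kubelet until the Job reaches its completion count. Below is a hedged sketch of a Job with that shape; the name, busybox image, completion counts, and the clock-based failing command are invented for illustration and are not what the e2e framework actually runs.

```go
package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	parallelism := int32(2)
	completions := int32(4)
	job := batchv1.Job{
		TypeMeta:   metav1.TypeMeta{APIVersion: "batch/v1", Kind: "Job"},
		ObjectMeta: metav1.ObjectMeta{Name: "sometimes-fail-demo"},
		Spec: batchv1.JobSpec{
			Parallelism: &parallelism,
			Completions: &completions,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure lets the kubelet restart a failed container in
					// place instead of the Job controller replacing the pod.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "worker",
						Image: "busybox", // illustrative image
						// Exit non-zero roughly half the time (based on the
						// clock second) so some attempts are retried locally.
						Command: []string{"sh", "-c", "exit $(( $(date +%s) % 2 ))"},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}
```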
• [SLOW TEST:18.594 seconds] [sig-apps] Job /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":19,"skipped":292,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:59:34.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-7ee2cdde-6f57-443b-915b-8e019d57a2b6 STEP: Creating a pod to test consume secrets Aug 21 05:59:34.176: INFO: Waiting up to 5m0s for pod "pod-secrets-eaa9aee4-edea-4c6e-9056-506b9013594d" in namespace "secrets-6392" to be "Succeeded or Failed" Aug 21 05:59:34.192: INFO: Pod "pod-secrets-eaa9aee4-edea-4c6e-9056-506b9013594d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.385831ms Aug 21 05:59:36.199: INFO: Pod "pod-secrets-eaa9aee4-edea-4c6e-9056-506b9013594d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022482759s Aug 21 05:59:38.205: INFO: Pod "pod-secrets-eaa9aee4-edea-4c6e-9056-506b9013594d": Phase="Running", Reason="", readiness=true. Elapsed: 4.028551138s Aug 21 05:59:40.212: INFO: Pod "pod-secrets-eaa9aee4-edea-4c6e-9056-506b9013594d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035253871s STEP: Saw pod success Aug 21 05:59:40.212: INFO: Pod "pod-secrets-eaa9aee4-edea-4c6e-9056-506b9013594d" satisfied condition "Succeeded or Failed" Aug 21 05:59:40.218: INFO: Trying to get logs from node kali-worker pod pod-secrets-eaa9aee4-edea-4c6e-9056-506b9013594d container secret-volume-test: STEP: delete the pod Aug 21 05:59:40.254: INFO: Waiting for pod pod-secrets-eaa9aee4-edea-4c6e-9056-506b9013594d to disappear Aug 21 05:59:40.263: INFO: Pod pod-secrets-eaa9aee4-edea-4c6e-9056-506b9013594d no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:59:40.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6392" for this suite. 
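The Secrets test above mounts the same Secret into one pod twice, at two different paths. A minimal sketch of such a pod follows; the Secret name "my-secret", the key read by the command, the busybox image, and the mount paths are placeholders, not the objects created in the run.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0444)
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "secret-two-volumes-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data && cat /etc/secret-volume-2/data"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			// The same Secret backs both volumes; "my-secret" and its "data"
			// key are placeholders for whatever the Secret actually holds.
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret", DefaultMode: &mode}}},
				{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret", DefaultMode: &mode}}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```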
• [SLOW TEST:6.224 seconds] [sig-storage] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":300,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:59:40.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-d234a613-62a2-4c36-93f8-a3eb8f37512c STEP: Creating a pod to test consume configMaps Aug 21 05:59:40.384: INFO: Waiting up to 5m0s for pod "pod-configmaps-32de04b6-555a-4f03-b28d-2e2e5705db7c" in namespace "configmap-5609" to be "Succeeded or Failed" Aug 21 05:59:40.402: INFO: Pod "pod-configmaps-32de04b6-555a-4f03-b28d-2e2e5705db7c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.787969ms Aug 21 05:59:42.410: INFO: Pod "pod-configmaps-32de04b6-555a-4f03-b28d-2e2e5705db7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025389766s Aug 21 05:59:44.417: INFO: Pod "pod-configmaps-32de04b6-555a-4f03-b28d-2e2e5705db7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03277361s STEP: Saw pod success Aug 21 05:59:44.418: INFO: Pod "pod-configmaps-32de04b6-555a-4f03-b28d-2e2e5705db7c" satisfied condition "Succeeded or Failed" Aug 21 05:59:44.423: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-32de04b6-555a-4f03-b28d-2e2e5705db7c container configmap-volume-test: STEP: delete the pod Aug 21 05:59:44.499: INFO: Waiting for pod pod-configmaps-32de04b6-555a-4f03-b28d-2e2e5705db7c to disappear Aug 21 05:59:44.509: INFO: Pod pod-configmaps-32de04b6-555a-4f03-b28d-2e2e5705db7c no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:59:44.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5609" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":313,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:59:44.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-93ae6ecb-bed4-423b-8faf-617bf02bbdd0 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:59:44.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3208" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":22,"skipped":330,"failed":0} SS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:59:44.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 05:59:44.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1336" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":23,"skipped":332,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 05:59:44.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Aug 21 05:59:44.803: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Aug 21 06:00:59.348: INFO: >>> kubeConfig: /root/.kube/config Aug 21 06:01:09.133: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:01:56.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3714" for this suite. 
• [SLOW TEST:132.257 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":24,"skipped":343,"failed":0} SSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:01:56.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Aug 21 06:01:57.078: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2333" to be "Succeeded or Failed" Aug 21 06:01:57.096: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.003837ms Aug 21 06:01:59.103: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02414847s Aug 21 06:02:01.128: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049452945s Aug 21 06:02:03.135: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056978072s STEP: Saw pod success Aug 21 06:02:03.136: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Aug 21 06:02:03.169: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Aug 21 06:02:03.230: INFO: Waiting for pod pod-host-path-test to disappear Aug 21 06:02:03.266: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:02:03.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2333" for this suite. 
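The HostPath test above mounts a directory from the node and checks the mode the container observes. The sketch below shows the general shape of such a pod, again assuming k8s.io/api v0.18.x; the host path, pod name, image, and the stat command are illustrative and not the exact mounttest invocation used by the suite.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "hostpath-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container-1",
				Image: "busybox", // illustrative image
				// Report the permission bits the container sees on the mount.
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/hostpath-demo", // illustrative host directory
						Type: &hostPathType,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```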
• [SLOW TEST:6.339 seconds] [sig-storage] HostPath /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":348,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:02:03.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 06:02:11.477: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 06:02:13.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733586531, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733586531, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733586531, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733586531, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 06:02:16.543: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:02:17.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7912" for this suite. STEP: Destroying namespace "webhook-7912-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.930 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":26,"skipped":360,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:02:17.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:02:17.989: INFO: Creating ReplicaSet my-hostname-basic-9539e68e-6242-417f-a447-84b1d1803ae8 Aug 21 06:02:18.772: INFO: Pod name my-hostname-basic-9539e68e-6242-417f-a447-84b1d1803ae8: Found 0 pods out of 1 Aug 21 06:02:23.781: INFO: Pod name my-hostname-basic-9539e68e-6242-417f-a447-84b1d1803ae8: Found 1 pods out of 1 Aug 21 06:02:23.781: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9539e68e-6242-417f-a447-84b1d1803ae8" is running Aug 21 06:02:23.786: INFO: Pod "my-hostname-basic-9539e68e-6242-417f-a447-84b1d1803ae8-ds2pv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 06:02:19 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 06:02:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 06:02:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 06:02:18 +0000 UTC Reason: Message:}]) Aug 21 06:02:23.787: INFO: Trying to dial the pod Aug 21 06:02:28.809: INFO: Controller my-hostname-basic-9539e68e-6242-417f-a447-84b1d1803ae8: Got expected result 
from replica 1 [my-hostname-basic-9539e68e-6242-417f-a447-84b1d1803ae8-ds2pv]: "my-hostname-basic-9539e68e-6242-417f-a447-84b1d1803ae8-ds2pv", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:02:28.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9021" for this suite. • [SLOW TEST:11.601 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":27,"skipped":378,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:02:28.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Aug 21 06:02:35.653: INFO: Successfully updated pod "labelsupdate02bb0aa3-bda2-4e02-976c-3f090d968291" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:02:37.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3421" for this suite. 
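The labels-update test above relies on the kubelet refreshing a downward API volume file after the pod's labels are changed through the API. Here is a rough sketch of a pod that projects metadata.labels into a file; the names, busybox image, and polling loop are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox", // illustrative image
				// Keep re-reading the projected labels file; the kubelet
				// rewrites it after the pod's labels are updated via the API.
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```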
• [SLOW TEST:8.874 seconds] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:02:37.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:02:37.805: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Aug 21 06:02:40.061: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:02:40.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5361" for this suite. 
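The ReplicationController test above creates a quota that admits only two pods and an RC that asks for more, then checks that the RC surfaces a failure condition until it is scaled down. The sketch below shows the two kinds of objects involved; the names, replica count, and httpd image are illustrative, not the "condition-test" objects from this run.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	quota := corev1.ResourceQuota{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ResourceQuota"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-quota-demo"},
		Spec: corev1.ResourceQuotaSpec{
			// Admit at most two pods in the namespace.
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	rc := corev1.ReplicationController{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ReplicationController"},
		ObjectMeta: metav1.ObjectMeta{Name: "condition-demo"},
		Spec: corev1.ReplicationControllerSpec{
			// Asking for three replicas exceeds the quota, so the controller
			// records a ReplicaFailure condition until the RC is scaled down.
			Replicas: &replicas,
			Selector: map[string]string{"name": "condition-demo"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "condition-demo"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "httpd", Image: "httpd"}}, // illustrative
				},
			},
		},
	}
	for _, obj := range []interface{}{quota, rc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```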
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":29,"skipped":414,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:02:40.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 21 06:02:40.384: INFO: Waiting up to 5m0s for pod "pod-2a62dc3d-bee1-4c6d-9123-d41fe43222f9" in namespace "emptydir-7218" to be "Succeeded or Failed" Aug 21 06:02:40.408: INFO: Pod "pod-2a62dc3d-bee1-4c6d-9123-d41fe43222f9": Phase="Pending", Reason="", readiness=false. Elapsed: 23.123667ms Aug 21 06:02:42.519: INFO: Pod "pod-2a62dc3d-bee1-4c6d-9123-d41fe43222f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134313898s Aug 21 06:02:44.526: INFO: Pod "pod-2a62dc3d-bee1-4c6d-9123-d41fe43222f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141571719s Aug 21 06:02:46.615: INFO: Pod "pod-2a62dc3d-bee1-4c6d-9123-d41fe43222f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.230333469s STEP: Saw pod success Aug 21 06:02:46.615: INFO: Pod "pod-2a62dc3d-bee1-4c6d-9123-d41fe43222f9" satisfied condition "Succeeded or Failed" Aug 21 06:02:46.641: INFO: Trying to get logs from node kali-worker2 pod pod-2a62dc3d-bee1-4c6d-9123-d41fe43222f9 container test-container: STEP: delete the pod Aug 21 06:02:46.691: INFO: Waiting for pod pod-2a62dc3d-bee1-4c6d-9123-d41fe43222f9 to disappear Aug 21 06:02:46.694: INFO: Pod pod-2a62dc3d-bee1-4c6d-9123-d41fe43222f9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:02:46.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7218" for this suite. 
• [SLOW TEST:6.436 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":422,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:02:46.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:02:47.158: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-7952f273-5e8a-452d-b050-cfe3487bb452" in namespace "security-context-test-7795" to be "Succeeded or Failed" Aug 21 06:02:47.962: INFO: Pod "alpine-nnp-false-7952f273-5e8a-452d-b050-cfe3487bb452": Phase="Pending", Reason="", readiness=false. Elapsed: 802.948574ms Aug 21 06:02:49.968: INFO: Pod "alpine-nnp-false-7952f273-5e8a-452d-b050-cfe3487bb452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.809367237s Aug 21 06:02:52.604: INFO: Pod "alpine-nnp-false-7952f273-5e8a-452d-b050-cfe3487bb452": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.445155328s Aug 21 06:02:52.604: INFO: Pod "alpine-nnp-false-7952f273-5e8a-452d-b050-cfe3487bb452" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:02:52.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7795" for this suite. 
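The Security Context test above verifies that a container with allowPrivilegeEscalation set to false cannot gain extra privileges (the kernel's no_new_privs flag is set for its processes). A minimal sketch of such a pod spec follows; the alpine image, UID, and command are illustrative rather than the exact test container.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	allowEscalation := false
	runAsUser := int64(1000) // illustrative non-root UID
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "no-privilege-escalation-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "alpine-nnp-false",
				Image:   "alpine", // illustrative image
				Command: []string{"sh", "-c", "grep NoNewPrivs /proc/self/status"},
				SecurityContext: &corev1.SecurityContext{
					// With this set, setuid binaries cannot raise the process's
					// privileges beyond those of the pod user.
					AllowPrivilegeEscalation: &allowEscalation,
					RunAsUser:                &runAsUser,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```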
• [SLOW TEST:5.943 seconds] [k8s.io] Security Context /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":444,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:02:52.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:02:52.833: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6cdb3c2e-f487-417d-8c6b-e7fdb8b1eeeb", Controller:(*bool)(0x92a534a), BlockOwnerDeletion:(*bool)(0x92a534b)}} Aug 21 06:02:52.952: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"57a56536-0a68-45a9-8cad-d60e5276d903", Controller:(*bool)(0x92afc02), BlockOwnerDeletion:(*bool)(0x92afc03)}} Aug 21 06:02:52.994: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"01dc199a-7a53-42aa-9f01-8592165c6e1d", Controller:(*bool)(0x92afd9a), BlockOwnerDeletion:(*bool)(0x92afd9b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:02:58.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9649" for this suite. 
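The garbage collector test above creates three pods whose ownerReferences form a circle and verifies that this does not wedge collection. The sketch below shows how such owner references are expressed with the k8s.io/apimachinery types; the ownedPod helper, pod names, and placeholder UIDs are introduced for illustration, since in a real cluster the owner UIDs come back from the API server after the owners are created.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownedPod builds a pod that claims another pod as its controlling owner.
func ownedPod(name, ownerName string, ownerUID types.UID) corev1.Pod {
	controller := true
	blockOwnerDeletion := true
	return corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion:         "v1",
				Kind:               "Pod",
				Name:               ownerName,
				UID:                ownerUID,
				Controller:         &controller,
				BlockOwnerDeletion: &blockOwnerDeletion,
			}},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "c", Image: "busybox"}}, // illustrative
		},
	}
}

func main() {
	// pod1 owned by pod3, pod2 owned by pod1, pod3 owned by pod2: a dependency
	// circle like the one the garbage collector test sets up.
	pods := []corev1.Pod{
		ownedPod("pod1", "pod3", types.UID("uid-of-pod3")),
		ownedPod("pod2", "pod1", types.UID("uid-of-pod1")),
		ownedPod("pod3", "pod2", types.UID("uid-of-pod2")),
	}
	out, _ := json.MarshalIndent(pods, "", "  ")
	fmt.Println(string(out))
}
```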
• [SLOW TEST:5.423 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":32,"skipped":445,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:02:58.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Aug 21 06:02:58.208: INFO: Waiting up to 5m0s for pod "pod-b14667f2-1175-4599-bd4d-23ac83849aa7" in namespace "emptydir-1008" to be "Succeeded or Failed" Aug 21 06:02:58.246: INFO: Pod "pod-b14667f2-1175-4599-bd4d-23ac83849aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 37.956387ms Aug 21 06:03:00.253: INFO: Pod "pod-b14667f2-1175-4599-bd4d-23ac83849aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04482681s Aug 21 06:03:02.260: INFO: Pod "pod-b14667f2-1175-4599-bd4d-23ac83849aa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051940906s STEP: Saw pod success Aug 21 06:03:02.260: INFO: Pod "pod-b14667f2-1175-4599-bd4d-23ac83849aa7" satisfied condition "Succeeded or Failed" Aug 21 06:03:02.266: INFO: Trying to get logs from node kali-worker2 pod pod-b14667f2-1175-4599-bd4d-23ac83849aa7 container test-container: STEP: delete the pod Aug 21 06:03:02.330: INFO: Waiting for pod pod-b14667f2-1175-4599-bd4d-23ac83849aa7 to disappear Aug 21 06:03:02.337: INFO: Pod pod-b14667f2-1175-4599-bd4d-23ac83849aa7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:03:02.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1008" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":447,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:03:02.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-29643995-bcdd-4e5a-aea6-de55569e6213 in namespace container-probe-4413 Aug 21 06:03:06.493: INFO: Started pod busybox-29643995-bcdd-4e5a-aea6-de55569e6213 in namespace container-probe-4413 STEP: checking the pod's current state and verifying that restartCount is present Aug 21 06:03:06.498: INFO: Initial restart count of pod busybox-29643995-bcdd-4e5a-aea6-de55569e6213 is 0 Aug 21 06:03:56.699: INFO: Restart count of pod container-probe-4413/busybox-29643995-bcdd-4e5a-aea6-de55569e6213 is now 1 (50.200563121s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:03:56.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4413" for this suite. 
• [SLOW TEST:54.451 seconds] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:03:56.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-2500 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 21 06:03:56.866: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 21 06:03:56.979: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 21 06:03:59.060: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 21 06:04:01.017: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:04:02.986: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:04:04.987: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:04:06.987: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:04:08.985: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:04:10.986: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:04:12.986: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 21 06:04:13.034: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 21 06:04:15.043: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 21 06:04:17.043: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 21 06:04:19.041: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 21 06:04:21.043: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 21 06:04:23.047: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 21 06:04:29.116: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.98:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2500 PodName:host-test-container-pod ContainerName:agnhost Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 06:04:29.116: INFO: >>> kubeConfig: /root/.kube/config I0821 06:04:29.227306 10 log.go:172] (0x88be5b0) (0x88be620) Create stream I0821 06:04:29.227459 10 log.go:172] (0x88be5b0) (0x88be620) Stream added, broadcasting: 1 I0821 06:04:29.231130 10 log.go:172] (0x88be5b0) Reply frame received for 1 I0821 06:04:29.231382 10 log.go:172] (0x88be5b0) (0x88be7e0) Create stream I0821 06:04:29.231527 10 log.go:172] (0x88be5b0) (0x88be7e0) Stream added, broadcasting: 3 I0821 06:04:29.233337 10 log.go:172] (0x88be5b0) Reply frame received for 3 I0821 06:04:29.233543 10 log.go:172] (0x88be5b0) (0x8434c40) Create stream I0821 06:04:29.233643 10 log.go:172] (0x88be5b0) (0x8434c40) Stream added, broadcasting: 5 I0821 06:04:29.235093 10 log.go:172] (0x88be5b0) Reply frame received for 5 I0821 06:04:29.312350 10 log.go:172] (0x88be5b0) Data frame received for 3 I0821 06:04:29.312585 10 log.go:172] (0x88be7e0) (3) Data frame handling I0821 06:04:29.312872 10 log.go:172] (0x88be7e0) (3) Data frame sent I0821 06:04:29.313058 10 log.go:172] (0x88be5b0) Data frame received for 3 I0821 06:04:29.313245 10 log.go:172] (0x88be7e0) (3) Data frame handling I0821 06:04:29.313473 10 log.go:172] (0x88be5b0) Data frame received for 5 I0821 06:04:29.313614 10 log.go:172] (0x8434c40) (5) Data frame handling I0821 06:04:29.314661 10 log.go:172] (0x88be5b0) Data frame received for 1 I0821 06:04:29.314774 10 log.go:172] (0x88be620) (1) Data frame handling I0821 06:04:29.314908 10 log.go:172] (0x88be620) (1) Data frame sent I0821 06:04:29.315051 10 log.go:172] (0x88be5b0) (0x88be620) Stream removed, broadcasting: 1 I0821 06:04:29.315181 10 log.go:172] (0x88be5b0) Go away received I0821 06:04:29.315633 10 log.go:172] (0x88be5b0) (0x88be620) Stream removed, broadcasting: 1 I0821 06:04:29.315789 10 log.go:172] (0x88be5b0) (0x88be7e0) Stream removed, broadcasting: 3 I0821 06:04:29.315904 10 log.go:172] (0x88be5b0) (0x8434c40) Stream removed, broadcasting: 5 Aug 21 06:04:29.316: INFO: Found all expected endpoints: [netserver-0] Aug 21 06:04:29.321: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.106:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2500 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 06:04:29.321: INFO: >>> kubeConfig: /root/.kube/config I0821 06:04:29.423617 10 log.go:172] (0x88649a0) (0x8864a80) Create stream I0821 06:04:29.423722 10 log.go:172] (0x88649a0) (0x8864a80) Stream added, broadcasting: 1 I0821 06:04:29.426711 10 log.go:172] (0x88649a0) Reply frame received for 1 I0821 06:04:29.426916 10 log.go:172] (0x88649a0) (0x84353b0) Create stream I0821 06:04:29.427034 10 log.go:172] (0x88649a0) (0x84353b0) Stream added, broadcasting: 3 I0821 06:04:29.428500 10 log.go:172] (0x88649a0) Reply frame received for 3 I0821 06:04:29.428621 10 log.go:172] (0x88649a0) (0x8435570) Create stream I0821 06:04:29.428684 10 log.go:172] (0x88649a0) (0x8435570) Stream added, broadcasting: 5 I0821 06:04:29.429832 10 log.go:172] (0x88649a0) Reply frame received for 5 I0821 06:04:29.518561 10 log.go:172] (0x88649a0) Data frame received for 3 I0821 06:04:29.518799 10 log.go:172] (0x84353b0) (3) Data frame handling I0821 06:04:29.518925 10 log.go:172] (0x88649a0) Data frame received for 5 I0821 06:04:29.519094 10 log.go:172] (0x8435570) (5) Data frame handling I0821 06:04:29.519284 10 log.go:172] (0x84353b0) (3) Data 
frame sent I0821 06:04:29.519490 10 log.go:172] (0x88649a0) Data frame received for 3 I0821 06:04:29.519622 10 log.go:172] (0x84353b0) (3) Data frame handling I0821 06:04:29.519764 10 log.go:172] (0x88649a0) Data frame received for 1 I0821 06:04:29.519875 10 log.go:172] (0x8864a80) (1) Data frame handling I0821 06:04:29.520003 10 log.go:172] (0x8864a80) (1) Data frame sent I0821 06:04:29.520163 10 log.go:172] (0x88649a0) (0x8864a80) Stream removed, broadcasting: 1 I0821 06:04:29.520327 10 log.go:172] (0x88649a0) Go away received I0821 06:04:29.521064 10 log.go:172] (0x88649a0) (0x8864a80) Stream removed, broadcasting: 1 I0821 06:04:29.521250 10 log.go:172] (0x88649a0) (0x84353b0) Stream removed, broadcasting: 3 I0821 06:04:29.521381 10 log.go:172] (0x88649a0) (0x8435570) Stream removed, broadcasting: 5 Aug 21 06:04:29.521: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:04:29.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2500" for this suite. • [SLOW TEST:32.761 seconds] [sig-network] Networking /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":495,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:04:29.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:04:29.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2101" for this suite. STEP: Destroying namespace "nspatchtest-3e1da4ec-07d2-4e3b-85eb-ccf4d311d2d8-1961" for this suite. 
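The namespace-patch test above reduces to one API call: patch the namespace to add a label, then read it back and confirm the label is present. A rough client-go equivalent follows, assuming a kubeconfig at /root/.kube/config as in the log; the namespace name and label key/value are placeholders:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Add a label to an existing namespace with a JSON merge patch.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	ns, err := client.CoreV1().Namespaces().Patch(ctx, "nspatchtest-demo", // placeholder name
		types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	// Read the labels back, as the test's "ensuring it has the label" step does.
	fmt.Println("labels after patch:", ns.Labels)
}
```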
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":36,"skipped":524,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:04:29.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 21 06:04:39.976: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 06:04:39.999: INFO: Pod pod-with-prestop-http-hook still exists Aug 21 06:04:42.000: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 06:04:42.007: INFO: Pod pod-with-prestop-http-hook still exists Aug 21 06:04:44.000: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 06:04:44.005: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:04:44.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8211" for this suite. 
• [SLOW TEST:14.258 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":524,"failed":0} S ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:04:44.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-4497 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4497 to expose endpoints map[] Aug 21 06:04:44.178: INFO: successfully validated that service endpoint-test2 in namespace services-4497 exposes endpoints map[] (10.861462ms elapsed) STEP: Creating pod pod1 in namespace services-4497 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4497 to expose endpoints map[pod1:[80]] Aug 21 06:04:47.361: INFO: successfully validated that service endpoint-test2 in namespace services-4497 exposes endpoints map[pod1:[80]] (3.12603766s elapsed) STEP: Creating pod pod2 in namespace services-4497 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4497 to expose endpoints map[pod1:[80] pod2:[80]] Aug 21 06:04:51.615: INFO: successfully validated that service endpoint-test2 in namespace services-4497 exposes endpoints map[pod1:[80] pod2:[80]] (4.225681925s elapsed) STEP: Deleting pod pod1 in namespace services-4497 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4497 to expose endpoints map[pod2:[80]] Aug 21 06:04:51.673: INFO: successfully validated that service endpoint-test2 in namespace services-4497 exposes endpoints map[pod2:[80]] (51.786644ms elapsed) STEP: Deleting pod pod2 in namespace services-4497 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4497 to expose endpoints map[] Aug 21 06:04:51.732: INFO: successfully validated that service endpoint-test2 in namespace services-4497 exposes endpoints map[] (30.582836ms elapsed) 
[AfterEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:04:52.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4497" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:8.030 seconds] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":38,"skipped":525,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:04:52.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6018 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Aug 21 06:04:52.243: INFO: Found 0 stateful pods, waiting for 3 Aug 21 06:05:02.709: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 06:05:02.709: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 06:05:02.709: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 21 06:05:12.253: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 06:05:12.253: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 06:05:12.253: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 21 06:05:12.298: INFO: Updating stateful set ss2 STEP: Creating a new revision 
STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 21 06:05:22.365: INFO: Updating stateful set ss2 Aug 21 06:05:22.396: INFO: Waiting for Pod statefulset-6018/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Aug 21 06:05:33.020: INFO: Found 2 stateful pods, waiting for 3 Aug 21 06:05:43.031: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 06:05:43.031: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 06:05:43.031: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 21 06:05:43.102: INFO: Updating stateful set ss2 Aug 21 06:05:43.180: INFO: Waiting for Pod statefulset-6018/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 21 06:05:53.812: INFO: Updating stateful set ss2 Aug 21 06:05:53.939: INFO: Waiting for StatefulSet statefulset-6018/ss2 to complete update Aug 21 06:05:53.940: INFO: Waiting for Pod statefulset-6018/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Aug 21 06:06:03.959: INFO: Deleting all statefulset in ns statefulset-6018 Aug 21 06:06:03.965: INFO: Scaling statefulset ss2 to 0 Aug 21 06:06:24.000: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 06:06:24.028: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:06:24.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6018" for this suite. 
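In the StatefulSet test above, both the canary and the phased roll-out are controlled by .spec.updateStrategy.rollingUpdate.partition: only ordinals greater than or equal to the partition are moved to the new template revision, so lowering the partition step by step updates ss2-2 first and ss2-0 last. A small sketch of that strategy stanza (the replica count and partition value are illustrative):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	replicas := int32Ptr(3)

	// Only pods with ordinal >= Partition are updated to the new template
	// revision; lowering Partition in steps produces the canary and phased
	// roll-out behaviour exercised by the test.
	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: int32Ptr(2), // canary: with 3 replicas, only ordinal 2 is updated
		},
	}

	fmt.Printf("replicas=%d updateStrategy=%+v\n", *replicas, strategy)
}
```

Changing the pod template image (as the log does with httpd:2.4.38-alpine to 2.4.39-alpine) while the partition stays high leaves the lower ordinals on the old revision until the partition is lowered again.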
• [SLOW TEST:92.199 seconds] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":39,"skipped":527,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:06:24.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Aug 21 06:06:29.013: INFO: Successfully updated pod "annotationupdate8e93debf-7493-4417-a06e-c422c2d87f42" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:06:31.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9299" for this suite. 
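The downward API test above mounts the pod's own annotations as a file through a downwardAPI volume and then mutates the annotations, expecting the kubelet to rewrite the projected file shortly afterwards. A rough sketch of the volume wiring, with placeholder names, image, and annotation values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo", // placeholder name
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29", // placeholder image
				Command: []string{"/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "annotations",
							// The kubelet refreshes this file when the pod's
							// annotations change, which is what the test asserts.
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%d volume(s), mounted at %s\n",
		len(pod.Spec.Volumes), pod.Spec.Containers[0].VolumeMounts[0].MountPath)
}
```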
• [SLOW TEST:6.797 seconds] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":533,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:06:31.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:06:35.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2072" for this suite. 
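The kubelet test above schedules a busybox pod that echoes a known string and then verifies the same string comes back from the pod's log endpoint. A hedged sketch of the log read with client-go follows (the pod and namespace names are placeholders, and creation of the pod itself is omitted):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Roughly the API-level equivalent of `kubectl logs <pod> -n <namespace>`:
	// fetch the container's stdout, which the test compares to the echoed text.
	req := client.CoreV1().Pods("kubelet-test-demo").GetLogs(
		"busybox-scheduling-demo", &corev1.PodLogOptions{})
	raw, err := req.Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(raw))
}
```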
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":542,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:06:35.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:07:35.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8160" for this suite. • [SLOW TEST:60.130 seconds] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":549,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:07:35.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-e683ef82-2bf4-4c8a-a9f4-bec17d52a02b STEP: Creating the pod STEP: Updating configmap 
configmap-test-upd-e683ef82-2bf4-4c8a-a9f4-bec17d52a02b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:07:41.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8331" for this suite. • [SLOW TEST:6.272 seconds] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":594,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:07:41.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Aug 21 06:07:41.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1334' Aug 21 06:07:43.317: INFO: stderr: "" Aug 21 06:07:43.318: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 21 06:07:43.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1334' Aug 21 06:07:44.472: INFO: stderr: "" Aug 21 06:07:44.472: INFO: stdout: "update-demo-nautilus-g9xkw update-demo-nautilus-nklfm " Aug 21 06:07:44.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g9xkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1334' Aug 21 06:07:45.572: INFO: stderr: "" Aug 21 06:07:45.572: INFO: stdout: "" Aug 21 06:07:45.572: INFO: update-demo-nautilus-g9xkw is created but not running Aug 21 06:07:50.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1334' Aug 21 06:07:51.734: INFO: stderr: "" Aug 21 06:07:51.734: INFO: stdout: "update-demo-nautilus-g9xkw update-demo-nautilus-nklfm " Aug 21 06:07:51.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g9xkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1334' Aug 21 06:07:52.831: INFO: stderr: "" Aug 21 06:07:52.831: INFO: stdout: "true" Aug 21 06:07:52.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g9xkw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1334' Aug 21 06:07:53.939: INFO: stderr: "" Aug 21 06:07:53.939: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 06:07:53.940: INFO: validating pod update-demo-nautilus-g9xkw Aug 21 06:07:53.947: INFO: got data: { "image": "nautilus.jpg" } Aug 21 06:07:53.948: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 06:07:53.948: INFO: update-demo-nautilus-g9xkw is verified up and running Aug 21 06:07:53.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nklfm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1334' Aug 21 06:07:55.098: INFO: stderr: "" Aug 21 06:07:55.098: INFO: stdout: "true" Aug 21 06:07:55.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nklfm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1334' Aug 21 06:07:56.213: INFO: stderr: "" Aug 21 06:07:56.213: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 06:07:56.213: INFO: validating pod update-demo-nautilus-nklfm Aug 21 06:07:56.235: INFO: got data: { "image": "nautilus.jpg" } Aug 21 06:07:56.235: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 21 06:07:56.235: INFO: update-demo-nautilus-nklfm is verified up and running STEP: scaling down the replication controller Aug 21 06:07:56.249: INFO: scanned /root for discovery docs: Aug 21 06:07:56.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1334' Aug 21 06:07:58.430: INFO: stderr: "" Aug 21 06:07:58.430: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 21 06:07:58.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1334' Aug 21 06:07:59.695: INFO: stderr: "" Aug 21 06:07:59.695: INFO: stdout: "update-demo-nautilus-g9xkw update-demo-nautilus-nklfm " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 21 06:08:04.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1334' Aug 21 06:08:05.867: INFO: stderr: "" Aug 21 06:08:05.867: INFO: stdout: "update-demo-nautilus-g9xkw update-demo-nautilus-nklfm " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 21 06:08:10.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1334' Aug 21 06:08:12.008: INFO: stderr: "" Aug 21 06:08:12.008: INFO: stdout: "update-demo-nautilus-nklfm " Aug 21 06:08:12.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nklfm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1334' Aug 21 06:08:13.157: INFO: stderr: "" Aug 21 06:08:13.157: INFO: stdout: "true" Aug 21 06:08:13.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nklfm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1334' Aug 21 06:08:14.251: INFO: stderr: "" Aug 21 06:08:14.251: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 06:08:14.251: INFO: validating pod update-demo-nautilus-nklfm Aug 21 06:08:14.256: INFO: got data: { "image": "nautilus.jpg" } Aug 21 06:08:14.256: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 21 06:08:14.256: INFO: update-demo-nautilus-nklfm is verified up and running STEP: scaling up the replication controller Aug 21 06:08:14.265: INFO: scanned /root for discovery docs: Aug 21 06:08:14.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1334' Aug 21 06:08:15.511: INFO: stderr: "" Aug 21 06:08:15.512: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 21 06:08:15.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1334' Aug 21 06:08:16.663: INFO: stderr: "" Aug 21 06:08:16.664: INFO: stdout: "update-demo-nautilus-25xpw update-demo-nautilus-nklfm " Aug 21 06:08:16.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25xpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1334' Aug 21 06:08:18.133: INFO: stderr: "" Aug 21 06:08:18.133: INFO: stdout: "" Aug 21 06:08:18.133: INFO: update-demo-nautilus-25xpw is created but not running Aug 21 06:08:23.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1334' Aug 21 06:08:28.121: INFO: stderr: "" Aug 21 06:08:28.121: INFO: stdout: "update-demo-nautilus-25xpw update-demo-nautilus-nklfm " Aug 21 06:08:28.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25xpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1334' Aug 21 06:08:29.253: INFO: stderr: "" Aug 21 06:08:29.253: INFO: stdout: "true" Aug 21 06:08:29.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25xpw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1334' Aug 21 06:08:30.487: INFO: stderr: "" Aug 21 06:08:30.487: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 06:08:30.488: INFO: validating pod update-demo-nautilus-25xpw Aug 21 06:08:30.505: INFO: got data: { "image": "nautilus.jpg" } Aug 21 06:08:30.506: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 06:08:30.506: INFO: update-demo-nautilus-25xpw is verified up and running Aug 21 06:08:30.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nklfm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1334' Aug 21 06:08:31.630: INFO: stderr: "" Aug 21 06:08:31.630: INFO: stdout: "true" Aug 21 06:08:31.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nklfm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1334' Aug 21 06:08:32.734: INFO: stderr: "" Aug 21 06:08:32.734: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 06:08:32.734: INFO: validating pod update-demo-nautilus-nklfm Aug 21 06:08:32.739: INFO: got data: { "image": "nautilus.jpg" } Aug 21 06:08:32.739: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 06:08:32.739: INFO: update-demo-nautilus-nklfm is verified up and running STEP: using delete to clean up resources Aug 21 06:08:32.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1334' Aug 21 06:08:33.808: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 21 06:08:33.809: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 21 06:08:33.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1334' Aug 21 06:08:35.039: INFO: stderr: "No resources found in kubectl-1334 namespace.\n" Aug 21 06:08:35.039: INFO: stdout: "" Aug 21 06:08:35.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1334 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 21 06:08:36.176: INFO: stderr: "" Aug 21 06:08:36.177: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:08:36.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1334" for this suite. 
• [SLOW TEST:54.540 seconds] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":44,"skipped":605,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:08:36.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 21 06:08:52.052: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 21 06:08:54.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733586932, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733586932, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733586932, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733586932, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 06:08:57.130: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:08:57.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:08:58.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5147" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:22.441 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":45,"skipped":608,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:08:58.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-405 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-405 Aug 21 06:08:59.374: INFO: Found 0 stateful pods, waiting for 1 Aug 21 06:09:09.398: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Aug 21 06:09:09.427: INFO: Deleting all statefulset in ns statefulset-405 Aug 21 06:09:09.477: INFO: Scaling statefulset ss to 0 Aug 21 06:09:19.607: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 06:09:19.612: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:09:19.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-405" for this suite. • [SLOW TEST:21.007 seconds] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":46,"skipped":611,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:09:19.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:09:19.733: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:09:23.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8876" for this suite. 
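The pods test above drives the pod's exec subresource over a websocket to run a command in the container remotely. The sketch below uses the more common SPDY-based executor from client-go rather than the raw websocket client the test itself exercises; the namespace, pod, container, and command are placeholder assumptions:

```go
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Build the URL for POST .../pods/<name>/exec with the command encoded as
	// query parameters, then stream the command's output back.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("pods-demo").           // placeholder namespace
		Name("pod-exec-websockets-demo"). // placeholder pod name
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main",
			Command:   []string{"/bin/sh", "-c", "echo remotely executed"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}
```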
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":617,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:09:23.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:09:30.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2941" for this suite. STEP: Destroying namespace "nsdeletetest-3055" for this suite. Aug 21 06:09:30.370: INFO: Namespace nsdeletetest-3055 was already deleted STEP: Destroying namespace "nsdeletetest-7754" for this suite. 
• [SLOW TEST:6.386 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":48,"skipped":618,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:09:30.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-821 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 21 06:09:30.436: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 21 06:09:30.568: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 21 06:09:32.662: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 21 06:09:34.627: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:09:36.577: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:09:38.576: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:09:40.574: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:09:42.576: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:09:44.576: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:09:46.577: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:09:48.577: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:09:50.614: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:09:52.576: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 06:09:54.576: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 21 06:09:54.587: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 21 06:10:02.657: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.118 8081 | grep -v '^\s*$'] Namespace:pod-network-test-821 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 06:10:02.658: INFO: >>> kubeConfig: /root/.kube/config I0821 
06:10:02.761445 10 log.go:172] (0x8434770) (0x84347e0) Create stream I0821 06:10:02.761622 10 log.go:172] (0x8434770) (0x84347e0) Stream added, broadcasting: 1 I0821 06:10:02.765980 10 log.go:172] (0x8434770) Reply frame received for 1 I0821 06:10:02.766107 10 log.go:172] (0x8434770) (0x9f78460) Create stream I0821 06:10:02.766174 10 log.go:172] (0x8434770) (0x9f78460) Stream added, broadcasting: 3 I0821 06:10:02.767239 10 log.go:172] (0x8434770) Reply frame received for 3 I0821 06:10:02.767398 10 log.go:172] (0x8434770) (0x8435030) Create stream I0821 06:10:02.767499 10 log.go:172] (0x8434770) (0x8435030) Stream added, broadcasting: 5 I0821 06:10:02.768918 10 log.go:172] (0x8434770) Reply frame received for 5 I0821 06:10:03.885088 10 log.go:172] (0x8434770) Data frame received for 3 I0821 06:10:03.885388 10 log.go:172] (0x9f78460) (3) Data frame handling I0821 06:10:03.885581 10 log.go:172] (0x9f78460) (3) Data frame sent I0821 06:10:03.885796 10 log.go:172] (0x8434770) Data frame received for 3 I0821 06:10:03.885966 10 log.go:172] (0x9f78460) (3) Data frame handling I0821 06:10:03.886199 10 log.go:172] (0x8434770) Data frame received for 5 I0821 06:10:03.886322 10 log.go:172] (0x8435030) (5) Data frame handling I0821 06:10:03.887712 10 log.go:172] (0x8434770) Data frame received for 1 I0821 06:10:03.887824 10 log.go:172] (0x84347e0) (1) Data frame handling I0821 06:10:03.887935 10 log.go:172] (0x84347e0) (1) Data frame sent I0821 06:10:03.888055 10 log.go:172] (0x8434770) (0x84347e0) Stream removed, broadcasting: 1 I0821 06:10:03.888196 10 log.go:172] (0x8434770) Go away received I0821 06:10:03.888860 10 log.go:172] (0x8434770) (0x84347e0) Stream removed, broadcasting: 1 I0821 06:10:03.889072 10 log.go:172] (0x8434770) (0x9f78460) Stream removed, broadcasting: 3 I0821 06:10:03.889203 10 log.go:172] (0x8434770) (0x8435030) Stream removed, broadcasting: 5 Aug 21 06:10:03.889: INFO: Found all expected endpoints: [netserver-0] Aug 21 06:10:03.895: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.126 8081 | grep -v '^\s*$'] Namespace:pod-network-test-821 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 06:10:03.896: INFO: >>> kubeConfig: /root/.kube/config I0821 06:10:04.002462 10 log.go:172] (0x8435f80) (0x772c000) Create stream I0821 06:10:04.002697 10 log.go:172] (0x8435f80) (0x772c000) Stream added, broadcasting: 1 I0821 06:10:04.008303 10 log.go:172] (0x8435f80) Reply frame received for 1 I0821 06:10:04.008508 10 log.go:172] (0x8435f80) (0x9f79500) Create stream I0821 06:10:04.008584 10 log.go:172] (0x8435f80) (0x9f79500) Stream added, broadcasting: 3 I0821 06:10:04.010104 10 log.go:172] (0x8435f80) Reply frame received for 3 I0821 06:10:04.010258 10 log.go:172] (0x8435f80) (0x9f79880) Create stream I0821 06:10:04.010327 10 log.go:172] (0x8435f80) (0x9f79880) Stream added, broadcasting: 5 I0821 06:10:04.011668 10 log.go:172] (0x8435f80) Reply frame received for 5 I0821 06:10:05.113283 10 log.go:172] (0x8435f80) Data frame received for 3 I0821 06:10:05.113541 10 log.go:172] (0x9f79500) (3) Data frame handling I0821 06:10:05.113699 10 log.go:172] (0x9f79500) (3) Data frame sent I0821 06:10:05.113821 10 log.go:172] (0x8435f80) Data frame received for 3 I0821 06:10:05.113929 10 log.go:172] (0x9f79500) (3) Data frame handling I0821 06:10:05.114077 10 log.go:172] (0x8435f80) Data frame received for 5 I0821 06:10:05.114149 10 log.go:172] (0x9f79880) (5) Data frame handling I0821 
06:10:05.114715 10 log.go:172] (0x8435f80) Data frame received for 1 I0821 06:10:05.114876 10 log.go:172] (0x772c000) (1) Data frame handling I0821 06:10:05.115057 10 log.go:172] (0x772c000) (1) Data frame sent I0821 06:10:05.115194 10 log.go:172] (0x8435f80) (0x772c000) Stream removed, broadcasting: 1 I0821 06:10:05.115414 10 log.go:172] (0x8435f80) Go away received I0821 06:10:05.116276 10 log.go:172] (0x8435f80) (0x772c000) Stream removed, broadcasting: 1 I0821 06:10:05.116561 10 log.go:172] (0x8435f80) (0x9f79500) Stream removed, broadcasting: 3 I0821 06:10:05.116889 10 log.go:172] (0x8435f80) (0x9f79880) Stream removed, broadcasting: 5 Aug 21 06:10:05.117: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:10:05.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-821" for this suite. • [SLOW TEST:34.764 seconds] [sig-network] Networking /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":642,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:10:05.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6209 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6209 STEP: creating replication controller externalsvc in namespace services-6209 I0821 06:10:05.359604 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6209, replica count: 2 I0821 06:10:08.414809 10 runners.go:190] 
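As an aside on the node-to-pod UDP check that just passed: the probe driven through the streamed exec above boils down to the netcat one-liner visible in the ExecWithOptions record, run inside the host-network helper pod. A hand-run equivalent is sketched below, assuming the pod and namespace names from the log still exist; the POD_IP variable and the jsonpath lookup are illustrative additions, not something the framework itself runs.

$ POD_IP=$(kubectl get pod netserver-0 -n pod-network-test-821 -o jsonpath='{.status.podIP}')   # e.g. 10.244.2.118 above
$ kubectl exec -n pod-network-test-821 host-test-container-pod -c agnhost -- \
    /bin/sh -c "echo hostName | nc -w 1 -u ${POD_IP} 8081 | grep -v '^\s*$'"

A non-empty reply (the netserver's hostname) is what the grep keeps; an empty result within the one-second timeout is what would have made the endpoint check fail instead of reporting "Found all expected endpoints".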
externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0821 06:10:11.417133 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0821 06:10:14.418296 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Aug 21 06:10:14.495: INFO: Creating new exec pod Aug 21 06:10:18.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-6209 execpodqf4ff -- /bin/sh -x -c nslookup clusterip-service' Aug 21 06:10:20.002: INFO: stderr: "I0821 06:10:19.851737 1273 log.go:172] (0x2f64d20) (0x2f64d90) Create stream\nI0821 06:10:19.854099 1273 log.go:172] (0x2f64d20) (0x2f64d90) Stream added, broadcasting: 1\nI0821 06:10:19.865992 1273 log.go:172] (0x2f64d20) Reply frame received for 1\nI0821 06:10:19.866755 1273 log.go:172] (0x2f64d20) (0x28a68c0) Create stream\nI0821 06:10:19.866850 1273 log.go:172] (0x2f64d20) (0x28a68c0) Stream added, broadcasting: 3\nI0821 06:10:19.868712 1273 log.go:172] (0x2f64d20) Reply frame received for 3\nI0821 06:10:19.869355 1273 log.go:172] (0x2f64d20) (0x29ea3f0) Create stream\nI0821 06:10:19.869548 1273 log.go:172] (0x2f64d20) (0x29ea3f0) Stream added, broadcasting: 5\nI0821 06:10:19.871197 1273 log.go:172] (0x2f64d20) Reply frame received for 5\nI0821 06:10:19.945715 1273 log.go:172] (0x2f64d20) Data frame received for 5\nI0821 06:10:19.945952 1273 log.go:172] (0x29ea3f0) (5) Data frame handling\nI0821 06:10:19.946381 1273 log.go:172] (0x29ea3f0) (5) Data frame sent\n+ nslookup clusterip-service\nI0821 06:10:19.980975 1273 log.go:172] (0x2f64d20) Data frame received for 3\nI0821 06:10:19.981165 1273 log.go:172] (0x28a68c0) (3) Data frame handling\nI0821 06:10:19.981320 1273 log.go:172] (0x28a68c0) (3) Data frame sent\nI0821 06:10:19.982117 1273 log.go:172] (0x2f64d20) Data frame received for 3\nI0821 06:10:19.982243 1273 log.go:172] (0x28a68c0) (3) Data frame handling\nI0821 06:10:19.982363 1273 log.go:172] (0x28a68c0) (3) Data frame sent\nI0821 06:10:19.982565 1273 log.go:172] (0x2f64d20) Data frame received for 3\nI0821 06:10:19.982693 1273 log.go:172] (0x28a68c0) (3) Data frame handling\nI0821 06:10:19.983163 1273 log.go:172] (0x2f64d20) Data frame received for 5\nI0821 06:10:19.983279 1273 log.go:172] (0x29ea3f0) (5) Data frame handling\nI0821 06:10:19.985477 1273 log.go:172] (0x2f64d20) Data frame received for 1\nI0821 06:10:19.985590 1273 log.go:172] (0x2f64d90) (1) Data frame handling\nI0821 06:10:19.985691 1273 log.go:172] (0x2f64d90) (1) Data frame sent\nI0821 06:10:19.986214 1273 log.go:172] (0x2f64d20) (0x2f64d90) Stream removed, broadcasting: 1\nI0821 06:10:19.989041 1273 log.go:172] (0x2f64d20) Go away received\nI0821 06:10:19.991953 1273 log.go:172] (0x2f64d20) (0x2f64d90) Stream removed, broadcasting: 1\nI0821 06:10:19.992237 1273 log.go:172] (0x2f64d20) (0x28a68c0) Stream removed, broadcasting: 3\nI0821 06:10:19.992426 1273 log.go:172] (0x2f64d20) (0x29ea3f0) Stream removed, broadcasting: 5\n" Aug 21 06:10:20.003: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6209.svc.cluster.local\tcanonical name = externalsvc.services-6209.svc.cluster.local.\nName:\texternalsvc.services-6209.svc.cluster.local\nAddress: 
10.98.86.120\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6209, will wait for the garbage collector to delete the pods Aug 21 06:10:20.070: INFO: Deleting ReplicationController externalsvc took: 8.994387ms Aug 21 06:10:20.372: INFO: Terminating ReplicationController externalsvc pods took: 301.872234ms Aug 21 06:10:29.260: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:10:29.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6209" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:24.149 seconds] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":50,"skipped":653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:10:29.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:10:29.410: INFO: Creating deployment "webserver-deployment" Aug 21 06:10:29.416: INFO: Waiting for observed generation 1 Aug 21 06:10:31.592: INFO: Waiting for all required pods to come up Aug 21 06:10:31.621: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 21 06:10:43.646: INFO: Waiting for deployment "webserver-deployment" to complete Aug 21 06:10:43.657: INFO: Updating deployment "webserver-deployment" with a non-existent image Aug 21 06:10:43.670: INFO: Updating deployment webserver-deployment Aug 21 06:10:43.671: INFO: Waiting for observed generation 2 Aug 21 06:10:45.880: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 21 06:10:46.239: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 21 06:10:46.484: INFO: Waiting for the first rollout's 
replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 21 06:10:47.193: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 21 06:10:47.193: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 21 06:10:47.208: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 21 06:10:47.216: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Aug 21 06:10:47.216: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Aug 21 06:10:47.227: INFO: Updating deployment webserver-deployment Aug 21 06:10:47.227: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Aug 21 06:10:47.362: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 21 06:10:50.467: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 21 06:10:51.418: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7515 /apis/apps/v1/namespaces/deployment-7515/deployments/webserver-deployment 96884769-d7af-4bbb-8ac4-adf6676662ab 2014638 3 2020-08-21 06:10:29 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 
108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x8596188 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-21 06:10:47 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-08-21 06:10:47 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Aug 21 06:10:51.748: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-7515 /apis/apps/v1/namespaces/deployment-7515/replicasets/webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 2014635 3 2020-08-21 06:10:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 96884769-d7af-4bbb-8ac4-adf6676662ab 0x92a54e7 0x92a54e8}] [] [{kube-controller-manager Update apps/v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 54 56 56 52 55 54 57 45 100 55 97 102 45 52 98 98 98 45 56 97 99 52 45 97 100 102 54 54 55 54 54 54 50 97 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 
97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x92a5568 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 21 06:10:51.748: INFO: All old ReplicaSets of Deployment "webserver-deployment": Aug 21 06:10:51.750: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-7515 /apis/apps/v1/namespaces/deployment-7515/replicasets/webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 2014619 3 2020-08-21 06:10:29 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 96884769-d7af-4bbb-8ac4-adf6676662ab 0x92a55c7 0x92a55c8}] [] [{kube-controller-manager Update apps/v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 
FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 54 56 56 52 55 54 57 45 100 55 97 102 45 52 98 98 98 45 56 97 99 52 45 97 100 102 54 54 55 54 54 54 50 97 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 
117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x92a5638 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Aug 21 06:10:51.927: INFO: Pod "webserver-deployment-6676bcd6d4-2wwcg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2wwcg webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-2wwcg 1e7285c7-90db-45bf-a558-638fa4d5789e 2014637 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x92a5b47 0x92a5b48}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 
123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 06:10:47 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.929: INFO: Pod "webserver-deployment-6676bcd6d4-4lfcg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4lfcg webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-4lfcg b63cae42-9729-4eef-af2d-3e971b6a0895 2014544 0 2020-08-21 06:10:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x92a5cf7 0x92a5cf8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 
111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:ni
l,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 06:10:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.931: INFO: Pod "webserver-deployment-6676bcd6d4-bhvjg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bhvjg webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-bhvjg 760dbc13-eb03-4d84-b747-cddffe98369c 2014643 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x92a5ea7 0x92a5ea8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 06:10:47 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.933: INFO: Pod "webserver-deployment-6676bcd6d4-d9wgr" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-d9wgr webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-d9wgr 892ebddb-d573-4982-87e2-b0121f7478c2 2014537 0 2020-08-21 06:10:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x972e067 0x972e068}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 
111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:ni
l,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 06:10:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.935: INFO: Pod "webserver-deployment-6676bcd6d4-dcccg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dcccg webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-dcccg 9419efa0-bd6c-4484-8476-edd54c631327 2014513 0 2020-08-21 06:10:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x972e227 0x972e228}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 06:10:44 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.937: INFO: Pod "webserver-deployment-6676bcd6d4-jnktb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jnktb webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-jnktb 599dfec1-85e1-4904-917e-bb2461d8f223 2014625 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x972e3d7 0x972e3d8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 
111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:n
il,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.939: INFO: Pod "webserver-deployment-6676bcd6d4-nsm7w" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nsm7w webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-nsm7w 53e3e2c6-a7e5-4383-904b-e3ca9ee50746 2014710 0 2020-08-21 06:10:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x972e587 0x972e588}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 51 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:43 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.137,StartTime:2020-08-21 06:10:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.137,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.941: INFO: Pod "webserver-deployment-6676bcd6d4-pbzrf" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pbzrf webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-pbzrf 8183eeed-386a-4ca4-ad3f-f005670af5ca 2014660 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x972e767 0x972e768}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 
114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 06:10:47 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.943: INFO: Pod "webserver-deployment-6676bcd6d4-qcfqm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qcfqm webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-qcfqm a299d26e-1463-4211-a658-45bbd355cf4e 2014656 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x972e917 0x972e918}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 
111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:n
il,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.945: INFO: Pod "webserver-deployment-6676bcd6d4-qr4sc" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qr4sc webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-qr4sc 621df42e-09d3-41f8-8438-de977f1eef58 2014697 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x972eac7 0x972eac8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 06:10:48 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.947: INFO: Pod "webserver-deployment-6676bcd6d4-rb7bv" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rb7bv webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-rb7bv 5f50a9b3-487a-4674-b597-c5e187620b92 2014698 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x972ec77 0x972ec78}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 
111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:n
il,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 06:10:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.949: INFO: Pod "webserver-deployment-6676bcd6d4-xd68h" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-xd68h webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-xd68h 2d2de4d0-7734-4653-8cd7-2018fd67a85a 2014705 0 2020-08-21 06:10:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x972ee57 0x972ee58}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 50 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.126,StartTime:2020-08-21 
06:10:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.951: INFO: Pod "webserver-deployment-6676bcd6d4-xqgd6" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-xqgd6 webserver-deployment-6676bcd6d4- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-6676bcd6d4-xqgd6 0a6d6e9a-7df0-436e-b13a-b8dc1817727d 2014678 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 641f7216-3de5-4a74-9055-78340c4323bb 0x972f037 0x972f038}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 52 49 102 55 50 49 54 45 51 100 101 53 45 52 97 55 52 45 57 48 53 53 45 55 56 51 52 48 99 52 51 50 51 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 
116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 06:10:47 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.953: INFO: Pod "webserver-deployment-84855cf797-474fj" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-474fj webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-474fj dcc2481d-5a92-4943-ac52-1972fb0d28b4 2014649 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x972f1e7 0x972f1e8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 
116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysc
tls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.954: INFO: Pod "webserver-deployment-84855cf797-4nknp" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4nknp webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-4nknp c2001046-3047-487c-9e5e-16e8c10a6d8f 2014426 0 2020-08-21 06:10:29 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x972f377 0x972f378}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 
116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 51 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.134,StartTime:2020-08-21 06:10:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 06:10:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0c8525b5d98ab3a4ae71dc94546d448f14a9cfc0a3ae1d8a699f0e7b0e97de30,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.956: INFO: Pod "webserver-deployment-84855cf797-4wh6v" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4wh6v webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-4wh6v 6c10553e-c983-48a7-8635-7c913c472ba2 2014630 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x972f527 0x972f528}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 
114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.958: INFO: Pod "webserver-deployment-84855cf797-6bgkx" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6bgkx webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-6bgkx 95ec1194-64b7-4bf9-8fe0-e3431c049de2 2014610 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x972f6b7 0x972f6b8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 
06:10:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&Po
dSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.960: INFO: Pod "webserver-deployment-84855cf797-7qxjb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7qxjb webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-7qxjb 3fbf0026-87c2-4900-ba60-e5e54f73efc2 2014686 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x972f847 0x972f848}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:49 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.962: INFO: Pod "webserver-deployment-84855cf797-84zgt" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-84zgt webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-84zgt f0f828e9-d0ac-4212-ab0d-2da0b47ad670 2014648 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x972f9d7 0x972f9d8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 
06:10:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&P
odSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.963: INFO: Pod "webserver-deployment-84855cf797-9ch7n" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-9ch7n webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-9ch7n ac8666ff-4c77-4e5c-9851-b532f9c5669c 2014644 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x972fb67 0x972fb68}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 
34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.965: INFO: Pod "webserver-deployment-84855cf797-cxzd5" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cxzd5 webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-cxzd5 c72d6b5e-1d40-482b-9df5-60412ba192a2 2014377 0 2020-08-21 06:10:29 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x972fcf7 0x972fcf8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 
06:10:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 51 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.131,StartTime:2020-08-21 06:10:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 06:10:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c67f11d99f94a55695714700e7745bd6f96f211ec519b47faf327bafc4776156,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.131,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.967: INFO: Pod "webserver-deployment-84855cf797-dnkcs" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dnkcs webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-dnkcs b127937f-a317-4f1a-affd-9760076980e8 2014439 0 2020-08-21 06:10:29 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x972fea7 0x972fea8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 
105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 51 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.132,StartTime:2020-08-21 06:10:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 06:10:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f665c3589d0675d24b62c9e09be2a834e7dfa767d5fec5e48a30344ebc10c708,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.132,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.969: INFO: Pod "webserver-deployment-84855cf797-h9br2" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-h9br2 webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-h9br2 4f66b4c9-1113-467f-839e-b89310ad71f6 2014432 0 2020-08-21 06:10:29 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x9460057 0x9460058}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 
105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 50 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.122,StartTime:2020-08-21 06:10:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 06:10:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fc4cac5727a0ffc347d9b015f9b6fb6244516c170b0d08e2e627d5cadaf32391,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.122,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.971: INFO: Pod "webserver-deployment-84855cf797-lg865" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lg865 webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-lg865 be64bf46-4992-4fb5-afe7-d81f59e5379a 2014465 0 2020-08-21 06:10:29 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x9460207 0x9460208}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 
105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.124\"}":{".":{},"f:ip":{}}},"f:startTime":{}}
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.124,StartTime:2020-08-21 06:10:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 06:10:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b123f92182fe17057ce04169f7596a3183fd42cc5cf270f50176dfa95316392f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.973: INFO: Pod "webserver-deployment-84855cf797-mc84r" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-mc84r webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-mc84r c649e8be-086d-4ac5-aee8-4ccbae40c8fc 2014666 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x94603b7 0x94603b8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 
114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.974: INFO: Pod "webserver-deployment-84855cf797-mcbp9" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-mcbp9 webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-mcbp9 ffa832db-8b0b-4f02-9c07-c1cd01a91846 2014433 0 2020-08-21 06:10:29 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x9460547 0x9460548}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 
06:10:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.135\"}":{".":{},"f:ip":{}}},"f:startTime":{}}
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.135,StartTime:2020-08-21 06:10:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 06:10:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3ecaa8da2a90393633521705e7a25eabd7d3ce3dded96ba1bdf67bb1b83abe08,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.135,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.976: INFO: Pod "webserver-deployment-84855cf797-n72kj" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-n72kj webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-n72kj d00505bc-2b19-42df-a51a-958ee246e19b 2014652 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x94606f7 0x94606f8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 
114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.978: INFO: Pod "webserver-deployment-84855cf797-nx8st" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nx8st webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-nx8st 6eab29e7-0012-46b1-b430-8a59341250d8 2014420 0 2020-08-21 06:10:29 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x9460887 0x9460888}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 
06:10:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.133\"}":{".":{},"f:ip":{}}},"f:startTime":{}}
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.133,StartTime:2020-08-21 06:10:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 06:10:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://89bcb78f7b56d886f030de63bfa99407bb8fe3c054f1f96d97bfdf2239730622,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.980: INFO: Pod "webserver-deployment-84855cf797-qx4mb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-qx4mb webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-qx4mb f1b21d97-b421-46b7-b1ef-ce8562d9a1dd 2014687 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x9460a37 0x9460a38}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 
114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:49 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 06:10:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.982: INFO: Pod "webserver-deployment-84855cf797-v5pxq" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-v5pxq webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-v5pxq d3ee6418-c233-4f7c-822a-6415d0bf95f6 2014675 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x9460bc7 0x9460bc8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 
06:10:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&Po
dSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.984: INFO: Pod "webserver-deployment-84855cf797-vm8gv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vm8gv webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-vm8gv 421224fe-6d12-4a33-a64f-a07b25473c7a 2014653 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x9460d67 0x9460d68}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.985: INFO: Pod "webserver-deployment-84855cf797-w8hc6" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-w8hc6 webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-w8hc6 487645fd-ad2d-4ffb-b8a5-cf352c65bf11 2014633 0 2020-08-21 06:10:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x9460ef7 0x9460ef8}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 
06:10:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&P
odSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 06:10:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 06:10:51.987: INFO: Pod "webserver-deployment-84855cf797-z4zc9" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-z4zc9 webserver-deployment-84855cf797- deployment-7515 /api/v1/namespaces/deployment-7515/pods/webserver-deployment-84855cf797-z4zc9 42906d97-8498-449d-a099-e35ebb0ddc08 2014438 0 2020-08-21 06:10:29 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 04ef2c78-ddc2-42ff-a5c2-5c14ed4dc6d2 0x9461087 0x9461088}] [] [{kube-controller-manager Update v1 2020-08-21 06:10:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 101 102 50 99 55 56 45 100 100 99 50 45 52 50 102 102 45 97 53 99 50 45 53 99 49 52 101 100 52 100 99 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:10:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 50 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 
125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q2jqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q2jqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q2jqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:10:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.121,StartTime:2020-08-21 06:10:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 06:10:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://36ec5fd54b901a2a0306a00b354b2efab85a2d76ecc0b46b1e90744c420a2b17,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.121,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:10:51.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7515" for this suite. • [SLOW TEST:23.279 seconds] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":51,"skipped":707,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:10:52.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-27cf2418-c123-4646-82d5-a796abb0788a STEP: Creating a pod to test consume configMaps Aug 21 06:10:52.953: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a" in namespace "projected-9681" to be "Succeeded or Failed" Aug 21 06:10:53.209: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a": Phase="Pending", Reason="", readiness=false. Elapsed: 254.95278ms Aug 21 06:10:55.269: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31547914s Aug 21 06:10:58.348: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.393873001s Aug 21 06:11:00.897: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.943337966s Aug 21 06:11:03.475: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.521439572s Aug 21 06:11:05.736: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.782414982s Aug 21 06:11:07.747: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.793370164s Aug 21 06:11:10.820: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a": Phase="Running", Reason="", readiness=true. Elapsed: 17.866774099s Aug 21 06:11:12.892: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a": Phase="Running", Reason="", readiness=true. Elapsed: 19.937985135s Aug 21 06:11:15.137: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a": Phase="Running", Reason="", readiness=true. Elapsed: 22.183217955s Aug 21 06:11:17.376: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a": Phase="Running", Reason="", readiness=true. Elapsed: 24.422714182s Aug 21 06:11:19.387: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.433801968s STEP: Saw pod success Aug 21 06:11:19.388: INFO: Pod "pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a" satisfied condition "Succeeded or Failed" Aug 21 06:11:19.592: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a container projected-configmap-volume-test: STEP: delete the pod Aug 21 06:11:20.338: INFO: Waiting for pod pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a to disappear Aug 21 06:11:20.366: INFO: Pod pod-projected-configmaps-849e70f3-d0bf-4e27-a391-8205af24413a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:11:20.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9681" for this suite. 
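The projected-configMap test above mounts one ConfigMap into multiple volumes of the same pod (two, in the sketch below) and reads it back from both mount points. A minimal sketch of an equivalent manifest follows; the ConfigMap name, key, pod name, and image are illustrative assumptions, not the test's actual spec (the e2e test generates random names).

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                # assumed name; the test uses a generated one
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo   # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol-1
    projected:
      sources:
      - configMap:
          name: demo-config
  - name: projected-vol-2
    projected:
      sources:
      - configMap:
          name: demo-config
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: projected-vol-1
      mountPath: /etc/projected-1
      readOnly: true
    - name: projected-vol-2
      mountPath: /etc/projected-2
      readOnly: true

Applying a manifest like this with kubectl create -f and checking that the pod reaches Succeeded mirrors the "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" loop in the log above.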
• [SLOW TEST:27.915 seconds] [sig-storage] Projected configMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":709,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:11:20.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:11:20.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7456" for this suite. 
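The [sig-api-machinery] Secrets test above creates a secret, patches it, and deletes it via a label selector. A rough equivalent outside the e2e framework, assuming a secret named demo-secret with an illustrative label (the test generates its own names and label values):

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret                # assumed name
  labels:
    testsecret: "true"             # label later used by the label-selector delete
type: Opaque
stringData:
  key: original-value

The "patching the secret" step then corresponds to something like kubectl patch secret demo-secret -p '{"metadata":{"labels":{"patched":"true"}}}', and the cleanup to kubectl delete secret -l testsecret=true, matching the "deleting the secret using a LabelSelector" step in the log.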
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":53,"skipped":715,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:11:20.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:11:20.841: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 21 06:11:30.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5343 create -f -' Aug 21 06:11:35.122: INFO: stderr: "" Aug 21 06:11:35.122: INFO: stdout: "e2e-test-crd-publish-openapi-7568-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 21 06:11:35.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5343 delete e2e-test-crd-publish-openapi-7568-crds test-cr' Aug 21 06:11:36.267: INFO: stderr: "" Aug 21 06:11:36.267: INFO: stdout: "e2e-test-crd-publish-openapi-7568-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 21 06:11:36.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5343 apply -f -' Aug 21 06:11:37.823: INFO: stderr: "" Aug 21 06:11:37.823: INFO: stdout: "e2e-test-crd-publish-openapi-7568-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 21 06:11:37.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5343 delete e2e-test-crd-publish-openapi-7568-crds test-cr' Aug 21 06:11:38.934: INFO: stderr: "" Aug 21 06:11:38.935: INFO: stdout: "e2e-test-crd-publish-openapi-7568-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Aug 21 06:11:38.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7568-crds' Aug 21 06:11:40.399: INFO: stderr: "" Aug 21 06:11:40.399: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7568-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:11:50.117: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5343" for this suite. • [SLOW TEST:29.403 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":54,"skipped":759,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:11:50.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
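The "simple DaemonSet" created in the step above schedules one pod per worker node; because its pods carry no toleration for the node-role.kubernetes.io/master:NoSchedule taint, the kali-control-plane node is skipped, which is why the log below repeatedly notes that node before settling at 2 running nodes / 2 available pods. A minimal sketch of such a DaemonSet, with an illustrative image and labels rather than the test's exact spec:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # illustrative; not necessarily the test's image
        ports:
        - containerPort: 80

The "retry creating failed daemon pods" behavior exercised next comes from the DaemonSet controller itself: when a daemon pod ends up in phase Failed, the controller deletes it and creates a replacement, with no change needed in the manifest.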
Aug 21 06:11:50.275: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 06:11:50.282: INFO: Number of nodes with available pods: 0 Aug 21 06:11:50.282: INFO: Node kali-worker is running more than one daemon pod Aug 21 06:11:51.295: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 06:11:51.383: INFO: Number of nodes with available pods: 0 Aug 21 06:11:51.383: INFO: Node kali-worker is running more than one daemon pod Aug 21 06:11:52.565: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 06:11:52.845: INFO: Number of nodes with available pods: 0 Aug 21 06:11:52.845: INFO: Node kali-worker is running more than one daemon pod Aug 21 06:11:53.296: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 06:11:53.303: INFO: Number of nodes with available pods: 0 Aug 21 06:11:53.303: INFO: Node kali-worker is running more than one daemon pod Aug 21 06:11:55.444: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 06:11:55.474: INFO: Number of nodes with available pods: 1 Aug 21 06:11:55.474: INFO: Node kali-worker is running more than one daemon pod Aug 21 06:11:56.293: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 06:11:56.301: INFO: Number of nodes with available pods: 2 Aug 21 06:11:56.301: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 21 06:11:56.359: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 06:11:56.372: INFO: Number of nodes with available pods: 2 Aug 21 06:11:56.372: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8636, will wait for the garbage collector to delete the pods Aug 21 06:11:57.692: INFO: Deleting DaemonSet.extensions daemon-set took: 225.755428ms Aug 21 06:11:58.093: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.119322ms Aug 21 06:12:09.200: INFO: Number of nodes with available pods: 0 Aug 21 06:12:09.200: INFO: Number of running nodes: 0, number of available pods: 0 Aug 21 06:12:09.226: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8636/daemonsets","resourceVersion":"2015296"},"items":null} Aug 21 06:12:09.235: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8636/pods","resourceVersion":"2015296"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:12:09.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8636" for this suite. • [SLOW TEST:19.135 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":55,"skipped":825,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:12:09.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-b771c768-70d5-445e-805c-f9927e2c0ff2 STEP: Creating a pod to test consume configMaps Aug 21 06:12:09.379: INFO: Waiting up to 5m0s for pod "pod-configmaps-3973397b-07ac-4dea-a288-d5a8e4db1820" in namespace "configmap-9773" to be "Succeeded or Failed" Aug 21 06:12:09.392: INFO: Pod "pod-configmaps-3973397b-07ac-4dea-a288-d5a8e4db1820": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.024187ms Aug 21 06:12:11.407: INFO: Pod "pod-configmaps-3973397b-07ac-4dea-a288-d5a8e4db1820": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027515196s Aug 21 06:12:13.414: INFO: Pod "pod-configmaps-3973397b-07ac-4dea-a288-d5a8e4db1820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034238398s STEP: Saw pod success Aug 21 06:12:13.414: INFO: Pod "pod-configmaps-3973397b-07ac-4dea-a288-d5a8e4db1820" satisfied condition "Succeeded or Failed" Aug 21 06:12:13.419: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-3973397b-07ac-4dea-a288-d5a8e4db1820 container configmap-volume-test: STEP: delete the pod Aug 21 06:12:13.479: INFO: Waiting for pod pod-configmaps-3973397b-07ac-4dea-a288-d5a8e4db1820 to disappear Aug 21 06:12:13.490: INFO: Pod pod-configmaps-3973397b-07ac-4dea-a288-d5a8e4db1820 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:12:13.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9773" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":829,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:12:13.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Aug 21 06:12:13.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config cluster-info' Aug 21 06:12:14.707: INFO: stderr: "" Aug 21 06:12:14.707: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32915\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32915/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:12:14.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6964" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":57,"skipped":873,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:12:14.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-ad0793fd-fc16-4393-9fb5-f305aa2d4196 STEP: Creating a pod to test consume configMaps Aug 21 06:12:14.804: INFO: Waiting up to 5m0s for pod "pod-configmaps-2b0a02bd-2380-4754-bd06-29a8c4574cc6" in namespace "configmap-3614" to be "Succeeded or Failed" Aug 21 06:12:14.814: INFO: Pod "pod-configmaps-2b0a02bd-2380-4754-bd06-29a8c4574cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.363202ms Aug 21 06:12:16.821: INFO: Pod "pod-configmaps-2b0a02bd-2380-4754-bd06-29a8c4574cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017226032s Aug 21 06:12:18.830: INFO: Pod "pod-configmaps-2b0a02bd-2380-4754-bd06-29a8c4574cc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025673878s STEP: Saw pod success Aug 21 06:12:18.830: INFO: Pod "pod-configmaps-2b0a02bd-2380-4754-bd06-29a8c4574cc6" satisfied condition "Succeeded or Failed" Aug 21 06:12:18.850: INFO: Trying to get logs from node kali-worker pod pod-configmaps-2b0a02bd-2380-4754-bd06-29a8c4574cc6 container configmap-volume-test: STEP: delete the pod Aug 21 06:12:18.890: INFO: Waiting for pod pod-configmaps-2b0a02bd-2380-4754-bd06-29a8c4574cc6 to disappear Aug 21 06:12:18.894: INFO: Pod pod-configmaps-2b0a02bd-2380-4754-bd06-29a8c4574cc6 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:12:18.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3614" for this suite. 
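The two ConfigMap volume tests above (key-to-path mapping with an explicit item mode, and one ConfigMap consumed from multiple volumes in the same pod) reduce to volume definitions along these lines. This is a sketch with assumed names and modes, reusing the demo-config ConfigMap from the earlier sketch; the e2e tests generate their own ConfigMap names and use their own test image.

apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo      # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-vol-1
    configMap:
      name: demo-config            # assumed ConfigMap containing a key "data-1"
      items:
      - key: data-1
        path: path/to/data-1       # key-to-path mapping
        mode: 0400                 # per-item file mode, as in the "Item mode set" test
  - name: configmap-vol-2
    configMap:
      name: demo-config
      defaultMode: 0444            # volume-wide default mode
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/cm-1/path/to/data-1 && cat /etc/cm-2/data-1"]
    volumeMounts:
    - name: configmap-vol-1
      mountPath: /etc/cm-1
    - name: configmap-vol-2
      mountPath: /etc/cm-2

As with the other volume tests in this run, the pod runs to completion and is expected to reach Succeeded, which is the condition the log's 5m0s wait loop checks for.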
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":898,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:12:18.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:12:18.983: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:12:20.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3561" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":59,"skipped":920,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:12:20.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-7514 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7514 to expose endpoints map[] Aug 21 06:12:20.405: INFO: successfully validated that service multi-endpoint-test in namespace services-7514 exposes endpoints map[] (26.616152ms elapsed) STEP: Creating pod pod1 in namespace services-7514 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace 
services-7514 to expose endpoints map[pod1:[100]] Aug 21 06:12:23.601: INFO: successfully validated that service multi-endpoint-test in namespace services-7514 exposes endpoints map[pod1:[100]] (3.171347109s elapsed) STEP: Creating pod pod2 in namespace services-7514 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7514 to expose endpoints map[pod1:[100] pod2:[101]] Aug 21 06:12:26.819: INFO: successfully validated that service multi-endpoint-test in namespace services-7514 exposes endpoints map[pod1:[100] pod2:[101]] (3.210232929s elapsed) STEP: Deleting pod pod1 in namespace services-7514 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7514 to expose endpoints map[pod2:[101]] Aug 21 06:12:26.885: INFO: successfully validated that service multi-endpoint-test in namespace services-7514 exposes endpoints map[pod2:[101]] (58.730727ms elapsed) STEP: Deleting pod pod2 in namespace services-7514 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7514 to expose endpoints map[] Aug 21 06:12:26.930: INFO: successfully validated that service multi-endpoint-test in namespace services-7514 exposes endpoints map[] (17.103487ms elapsed) [AfterEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:12:27.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7514" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:7.185 seconds] [sig-network] Services /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":60,"skipped":933,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:12:27.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 06:12:34.962: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Aug 21 06:12:36.982: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587154, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587154, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587155, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587154, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 06:12:39.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587154, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587154, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587155, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587154, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 06:12:42.030: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:12:42.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5213" for this suite. STEP: Destroying namespace "webhook-5213-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.113 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":61,"skipped":939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:12:42.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6167 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6167 STEP: Creating statefulset with conflicting port in namespace statefulset-6167 STEP: Waiting until pod test-pod will start running in namespace statefulset-6167 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6167 Aug 21 06:12:48.590: INFO: Observed stateful pod in namespace: statefulset-6167, name: ss-0, uid: 53380403-10dc-405a-b047-800b5eefec80, status phase: Pending. Waiting for statefulset controller to delete. Aug 21 06:12:49.400: INFO: Observed stateful pod in namespace: statefulset-6167, name: ss-0, uid: 53380403-10dc-405a-b047-800b5eefec80, status phase: Failed. Waiting for statefulset controller to delete. Aug 21 06:12:49.608: INFO: Observed stateful pod in namespace: statefulset-6167, name: ss-0, uid: 53380403-10dc-405a-b047-800b5eefec80, status phase: Failed. Waiting for statefulset controller to delete. 
Aug 21 06:12:49.627: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6167 STEP: Removing pod with conflicting port in namespace statefulset-6167 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6167 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Aug 21 06:12:55.740: INFO: Deleting all statefulset in ns statefulset-6167 Aug 21 06:12:55.744: INFO: Scaling statefulset ss to 0 Aug 21 06:13:15.821: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 06:13:15.826: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:13:15.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6167" for this suite. • [SLOW TEST:33.467 seconds] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":62,"skipped":967,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:13:15.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:13:21.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9882" for this suite. 
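The adoption flow exercised above can be reproduced by hand: create a bare pod carrying a name=pod-adoption label, then a ReplicationController whose selector matches it; the controller adopts the existing pod instead of creating a new replica. In the sketch below the image and sleep command are assumptions.

# Orphan pod with the label the controller will select on.
kubectl run pod-adoption --image=busybox:1.31 --labels=name=pod-adoption --restart=Never -- sleep 3600

# ReplicationController with replicas=1 and a matching selector.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: sleeper
        image: busybox:1.31
        command: ["sleep", "3600"]
EOF

# The pre-existing pod should now carry an ownerReference to the RC rather than being replaced.
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # expect: ReplicationController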
• [SLOW TEST:5.266 seconds] [sig-apps] ReplicationController /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":63,"skipped":977,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:13:21.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 21 06:13:29.366: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 06:13:29.389: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 06:13:31.390: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 06:13:31.396: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 06:13:33.390: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 06:13:33.396: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 06:13:35.390: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 06:13:35.396: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 06:13:37.390: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 06:13:37.397: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 06:13:39.390: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 06:13:39.398: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:13:39.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8387" for this suite. 
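For reference, the "pod with lifecycle hook" in the spec above is essentially a pod whose container declares a postStart httpGet handler aimed at the helper pod created in [BeforeEach]. A minimal hand-written version is sketched below; the target host IP, port, path and image are placeholders rather than the values the suite wires in.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.2          # any long-running container works for the hook demo
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.1.23              # IP of the pod serving the hook endpoint (placeholder)
          port: 8080
          path: /echo?msg=poststart-http-hook
EOF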
• [SLOW TEST:18.291 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":979,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:13:39.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 21 06:13:47.574: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 06:13:47.611: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 06:13:49.612: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 06:13:49.620: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 06:13:51.612: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 06:13:51.644: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 06:13:53.613: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 06:13:53.620: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 06:13:55.612: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 06:13:55.619: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 06:13:57.612: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 06:13:57.619: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 06:13:59.612: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 06:13:59.618: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:13:59.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6431" for this suite. • [SLOW TEST:20.224 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":1005,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:13:59.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 21 06:13:59.730: INFO: Waiting up to 5m0s for pod "downward-api-61fc88a7-9d4c-47cf-8eac-a68e019cd436" in namespace "downward-api-8840" to be "Succeeded or Failed" Aug 21 06:13:59.747: INFO: Pod "downward-api-61fc88a7-9d4c-47cf-8eac-a68e019cd436": Phase="Pending", Reason="", readiness=false. Elapsed: 16.380685ms Aug 21 06:14:01.775: INFO: Pod "downward-api-61fc88a7-9d4c-47cf-8eac-a68e019cd436": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044062894s Aug 21 06:14:03.782: INFO: Pod "downward-api-61fc88a7-9d4c-47cf-8eac-a68e019cd436": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051028841s STEP: Saw pod success Aug 21 06:14:03.782: INFO: Pod "downward-api-61fc88a7-9d4c-47cf-8eac-a68e019cd436" satisfied condition "Succeeded or Failed" Aug 21 06:14:03.786: INFO: Trying to get logs from node kali-worker2 pod downward-api-61fc88a7-9d4c-47cf-8eac-a68e019cd436 container dapi-container: STEP: delete the pod Aug 21 06:14:03.844: INFO: Waiting for pod downward-api-61fc88a7-9d4c-47cf-8eac-a68e019cd436 to disappear Aug 21 06:14:03.848: INFO: Pod downward-api-61fc88a7-9d4c-47cf-8eac-a68e019cd436 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:14:03.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8840" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1016,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:14:03.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1551.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1551.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1551.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1551.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1551.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1551.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 06:14:12.093: INFO: DNS probes using dns-1551/dns-test-0fe08e06-3530-4ba3-af69-1d6c7e4bf2d9 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:14:12.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1551" for this suite. • [SLOW TEST:8.375 seconds] [sig-network] DNS /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":67,"skipped":1037,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:14:12.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6669 [It] should perform rolling updates and roll backs of template 
modifications [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Aug 21 06:14:13.022: INFO: Found 0 stateful pods, waiting for 3 Aug 21 06:14:23.031: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 06:14:23.031: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 06:14:23.031: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Aug 21 06:14:33.050: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 06:14:33.050: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 06:14:33.050: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 21 06:14:33.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6669 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 06:14:34.496: INFO: stderr: "I0821 06:14:34.345016 1442 log.go:172] (0x2cee070) (0x2cee0e0) Create stream\nI0821 06:14:34.347542 1442 log.go:172] (0x2cee070) (0x2cee0e0) Stream added, broadcasting: 1\nI0821 06:14:34.362510 1442 log.go:172] (0x2cee070) Reply frame received for 1\nI0821 06:14:34.363019 1442 log.go:172] (0x2cee070) (0x2cc2cb0) Create stream\nI0821 06:14:34.363099 1442 log.go:172] (0x2cee070) (0x2cc2cb0) Stream added, broadcasting: 3\nI0821 06:14:34.364430 1442 log.go:172] (0x2cee070) Reply frame received for 3\nI0821 06:14:34.364649 1442 log.go:172] (0x2cee070) (0x29d8230) Create stream\nI0821 06:14:34.364716 1442 log.go:172] (0x2cee070) (0x29d8230) Stream added, broadcasting: 5\nI0821 06:14:34.365924 1442 log.go:172] (0x2cee070) Reply frame received for 5\nI0821 06:14:34.446637 1442 log.go:172] (0x2cee070) Data frame received for 5\nI0821 06:14:34.446995 1442 log.go:172] (0x29d8230) (5) Data frame handling\nI0821 06:14:34.447979 1442 log.go:172] (0x29d8230) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 06:14:34.476493 1442 log.go:172] (0x2cee070) Data frame received for 3\nI0821 06:14:34.476685 1442 log.go:172] (0x2cc2cb0) (3) Data frame handling\nI0821 06:14:34.476860 1442 log.go:172] (0x2cee070) Data frame received for 5\nI0821 06:14:34.477005 1442 log.go:172] (0x29d8230) (5) Data frame handling\nI0821 06:14:34.477102 1442 log.go:172] (0x2cc2cb0) (3) Data frame sent\nI0821 06:14:34.477242 1442 log.go:172] (0x2cee070) Data frame received for 3\nI0821 06:14:34.477364 1442 log.go:172] (0x2cc2cb0) (3) Data frame handling\nI0821 06:14:34.477862 1442 log.go:172] (0x2cee070) Data frame received for 1\nI0821 06:14:34.478015 1442 log.go:172] (0x2cee0e0) (1) Data frame handling\nI0821 06:14:34.478140 1442 log.go:172] (0x2cee0e0) (1) Data frame sent\nI0821 06:14:34.478888 1442 log.go:172] (0x2cee070) (0x2cee0e0) Stream removed, broadcasting: 1\nI0821 06:14:34.480949 1442 log.go:172] (0x2cee070) Go away received\nI0821 06:14:34.482647 1442 log.go:172] (0x2cee070) (0x2cee0e0) Stream removed, broadcasting: 1\nI0821 06:14:34.482888 1442 log.go:172] (0x2cee070) (0x2cc2cb0) Stream removed, broadcasting: 3\nI0821 06:14:34.483190 1442 log.go:172] (0x2cee070) (0x29d8230) Stream removed, broadcasting: 5\n" Aug 21 06:14:34.497: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html'\n" Aug 21 06:14:34.497: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 21 06:14:44.548: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 21 06:14:54.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6669 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 06:14:56.037: INFO: stderr: "I0821 06:14:55.901772 1467 log.go:172] (0x31033b0) (0x3103420) Create stream\nI0821 06:14:55.903578 1467 log.go:172] (0x31033b0) (0x3103420) Stream added, broadcasting: 1\nI0821 06:14:55.917689 1467 log.go:172] (0x31033b0) Reply frame received for 1\nI0821 06:14:55.918278 1467 log.go:172] (0x31033b0) (0x28f0460) Create stream\nI0821 06:14:55.918353 1467 log.go:172] (0x31033b0) (0x28f0460) Stream added, broadcasting: 3\nI0821 06:14:55.919840 1467 log.go:172] (0x31033b0) Reply frame received for 3\nI0821 06:14:55.920133 1467 log.go:172] (0x31033b0) (0x2bde150) Create stream\nI0821 06:14:55.920213 1467 log.go:172] (0x31033b0) (0x2bde150) Stream added, broadcasting: 5\nI0821 06:14:55.921472 1467 log.go:172] (0x31033b0) Reply frame received for 5\nI0821 06:14:56.015409 1467 log.go:172] (0x31033b0) Data frame received for 3\nI0821 06:14:56.015669 1467 log.go:172] (0x28f0460) (3) Data frame handling\nI0821 06:14:56.015849 1467 log.go:172] (0x31033b0) Data frame received for 5\nI0821 06:14:56.015969 1467 log.go:172] (0x2bde150) (5) Data frame handling\nI0821 06:14:56.016082 1467 log.go:172] (0x2bde150) (5) Data frame sent\nI0821 06:14:56.016334 1467 log.go:172] (0x28f0460) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 06:14:56.016554 1467 log.go:172] (0x31033b0) Data frame received for 3\nI0821 06:14:56.016648 1467 log.go:172] (0x28f0460) (3) Data frame handling\nI0821 06:14:56.016851 1467 log.go:172] (0x31033b0) Data frame received for 5\nI0821 06:14:56.017015 1467 log.go:172] (0x31033b0) Data frame received for 1\nI0821 06:14:56.017229 1467 log.go:172] (0x3103420) (1) Data frame handling\nI0821 06:14:56.017359 1467 log.go:172] (0x2bde150) (5) Data frame handling\nI0821 06:14:56.017599 1467 log.go:172] (0x3103420) (1) Data frame sent\nI0821 06:14:56.018856 1467 log.go:172] (0x31033b0) (0x3103420) Stream removed, broadcasting: 1\nI0821 06:14:56.021731 1467 log.go:172] (0x31033b0) Go away received\nI0821 06:14:56.024524 1467 log.go:172] (0x31033b0) (0x3103420) Stream removed, broadcasting: 1\nI0821 06:14:56.024894 1467 log.go:172] (0x31033b0) (0x28f0460) Stream removed, broadcasting: 3\nI0821 06:14:56.025143 1467 log.go:172] (0x31033b0) (0x2bde150) Stream removed, broadcasting: 5\n" Aug 21 06:14:56.037: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 06:14:56.038: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 06:15:06.074: INFO: Waiting for StatefulSet statefulset-6669/ss2 to complete update Aug 21 06:15:06.075: INFO: Waiting for Pod statefulset-6669/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 21 06:15:06.075: INFO: Waiting for Pod statefulset-6669/ss2-1 to have revision ss2-84f9d6bf57 
update revision ss2-65c7964b94 Aug 21 06:15:06.075: INFO: Waiting for Pod statefulset-6669/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 21 06:15:16.088: INFO: Waiting for StatefulSet statefulset-6669/ss2 to complete update Aug 21 06:15:16.088: INFO: Waiting for Pod statefulset-6669/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 21 06:15:16.088: INFO: Waiting for Pod statefulset-6669/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 21 06:15:26.092: INFO: Waiting for StatefulSet statefulset-6669/ss2 to complete update Aug 21 06:15:26.092: INFO: Waiting for Pod statefulset-6669/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Aug 21 06:15:36.092: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6669 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 06:15:37.490: INFO: stderr: "I0821 06:15:37.343367 1490 log.go:172] (0x2b19c70) (0x2b19ce0) Create stream\nI0821 06:15:37.347996 1490 log.go:172] (0x2b19c70) (0x2b19ce0) Stream added, broadcasting: 1\nI0821 06:15:37.358151 1490 log.go:172] (0x2b19c70) Reply frame received for 1\nI0821 06:15:37.358871 1490 log.go:172] (0x2b19c70) (0x2b46070) Create stream\nI0821 06:15:37.358967 1490 log.go:172] (0x2b19c70) (0x2b46070) Stream added, broadcasting: 3\nI0821 06:15:37.360916 1490 log.go:172] (0x2b19c70) Reply frame received for 3\nI0821 06:15:37.361330 1490 log.go:172] (0x2b19c70) (0x2ce6070) Create stream\nI0821 06:15:37.361440 1490 log.go:172] (0x2b19c70) (0x2ce6070) Stream added, broadcasting: 5\nI0821 06:15:37.362898 1490 log.go:172] (0x2b19c70) Reply frame received for 5\nI0821 06:15:37.427370 1490 log.go:172] (0x2b19c70) Data frame received for 5\nI0821 06:15:37.427566 1490 log.go:172] (0x2ce6070) (5) Data frame handling\nI0821 06:15:37.427906 1490 log.go:172] (0x2ce6070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 06:15:37.457793 1490 log.go:172] (0x2b19c70) Data frame received for 5\nI0821 06:15:37.458067 1490 log.go:172] (0x2ce6070) (5) Data frame handling\nI0821 06:15:37.458467 1490 log.go:172] (0x2b19c70) Data frame received for 3\nI0821 06:15:37.458761 1490 log.go:172] (0x2b46070) (3) Data frame handling\nI0821 06:15:37.458972 1490 log.go:172] (0x2b46070) (3) Data frame sent\nI0821 06:15:37.459141 1490 log.go:172] (0x2b19c70) Data frame received for 3\nI0821 06:15:37.459278 1490 log.go:172] (0x2b46070) (3) Data frame handling\nI0821 06:15:37.459725 1490 log.go:172] (0x2b19c70) Data frame received for 1\nI0821 06:15:37.459928 1490 log.go:172] (0x2b19ce0) (1) Data frame handling\nI0821 06:15:37.460100 1490 log.go:172] (0x2b19ce0) (1) Data frame sent\nI0821 06:15:37.461190 1490 log.go:172] (0x2b19c70) (0x2b19ce0) Stream removed, broadcasting: 1\nI0821 06:15:37.469764 1490 log.go:172] (0x2b19c70) Go away received\nI0821 06:15:37.477734 1490 log.go:172] (0x2b19c70) (0x2b19ce0) Stream removed, broadcasting: 1\nI0821 06:15:37.477963 1490 log.go:172] (0x2b19c70) (0x2b46070) Stream removed, broadcasting: 3\nI0821 06:15:37.478163 1490 log.go:172] (0x2b19c70) (0x2ce6070) Stream removed, broadcasting: 5\n" Aug 21 06:15:37.491: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 06:15:37.491: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html' Aug 21 06:15:47.542: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 21 06:15:57.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6669 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 06:15:58.985: INFO: stderr: "I0821 06:15:58.867624 1514 log.go:172] (0x2d18070) (0x2d180e0) Create stream\nI0821 06:15:58.870685 1514 log.go:172] (0x2d18070) (0x2d180e0) Stream added, broadcasting: 1\nI0821 06:15:58.886705 1514 log.go:172] (0x2d18070) Reply frame received for 1\nI0821 06:15:58.887183 1514 log.go:172] (0x2d18070) (0x28e04d0) Create stream\nI0821 06:15:58.887259 1514 log.go:172] (0x2d18070) (0x28e04d0) Stream added, broadcasting: 3\nI0821 06:15:58.888391 1514 log.go:172] (0x2d18070) Reply frame received for 3\nI0821 06:15:58.888604 1514 log.go:172] (0x2d18070) (0x28b20e0) Create stream\nI0821 06:15:58.888661 1514 log.go:172] (0x2d18070) (0x28b20e0) Stream added, broadcasting: 5\nI0821 06:15:58.889974 1514 log.go:172] (0x2d18070) Reply frame received for 5\nI0821 06:15:58.964920 1514 log.go:172] (0x2d18070) Data frame received for 5\nI0821 06:15:58.965383 1514 log.go:172] (0x2d18070) Data frame received for 3\nI0821 06:15:58.965556 1514 log.go:172] (0x28b20e0) (5) Data frame handling\nI0821 06:15:58.965818 1514 log.go:172] (0x28e04d0) (3) Data frame handling\nI0821 06:15:58.966096 1514 log.go:172] (0x2d18070) Data frame received for 1\nI0821 06:15:58.966275 1514 log.go:172] (0x2d180e0) (1) Data frame handling\nI0821 06:15:58.966653 1514 log.go:172] (0x2d180e0) (1) Data frame sent\nI0821 06:15:58.967101 1514 log.go:172] (0x28e04d0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 06:15:58.967382 1514 log.go:172] (0x28b20e0) (5) Data frame sent\nI0821 06:15:58.967561 1514 log.go:172] (0x2d18070) Data frame received for 5\nI0821 06:15:58.967679 1514 log.go:172] (0x28b20e0) (5) Data frame handling\nI0821 06:15:58.967998 1514 log.go:172] (0x2d18070) Data frame received for 3\nI0821 06:15:58.968345 1514 log.go:172] (0x2d18070) (0x2d180e0) Stream removed, broadcasting: 1\nI0821 06:15:58.969973 1514 log.go:172] (0x28e04d0) (3) Data frame handling\nI0821 06:15:58.971292 1514 log.go:172] (0x2d18070) Go away received\nI0821 06:15:58.973686 1514 log.go:172] (0x2d18070) (0x2d180e0) Stream removed, broadcasting: 1\nI0821 06:15:58.973973 1514 log.go:172] (0x2d18070) (0x28e04d0) Stream removed, broadcasting: 3\nI0821 06:15:58.974482 1514 log.go:172] (0x2d18070) (0x28b20e0) Stream removed, broadcasting: 5\n" Aug 21 06:15:58.986: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 06:15:58.986: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 06:16:29.047: INFO: Waiting for StatefulSet statefulset-6669/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Aug 21 06:16:39.063: INFO: Deleting all statefulset in ns statefulset-6669 Aug 21 06:16:39.068: INFO: Scaling statefulset ss2 to 0 Aug 21 06:16:59.094: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 06:16:59.098: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:16:59.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6669" for this suite. • [SLOW TEST:166.857 seconds] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":68,"skipped":1039,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:16:59.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 21 06:16:59.221: INFO: Waiting up to 5m0s for pod "pod-2e669845-7242-4774-ae9a-97c354ecbb99" in namespace "emptydir-2662" to be "Succeeded or Failed" Aug 21 06:16:59.231: INFO: Pod "pod-2e669845-7242-4774-ae9a-97c354ecbb99": Phase="Pending", Reason="", readiness=false. Elapsed: 9.218212ms Aug 21 06:17:01.237: INFO: Pod "pod-2e669845-7242-4774-ae9a-97c354ecbb99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01541779s Aug 21 06:17:03.286: INFO: Pod "pod-2e669845-7242-4774-ae9a-97c354ecbb99": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.064933021s STEP: Saw pod success Aug 21 06:17:03.287: INFO: Pod "pod-2e669845-7242-4774-ae9a-97c354ecbb99" satisfied condition "Succeeded or Failed" Aug 21 06:17:03.294: INFO: Trying to get logs from node kali-worker2 pod pod-2e669845-7242-4774-ae9a-97c354ecbb99 container test-container: STEP: delete the pod Aug 21 06:17:03.355: INFO: Waiting for pod pod-2e669845-7242-4774-ae9a-97c354ecbb99 to disappear Aug 21 06:17:03.365: INFO: Pod pod-2e669845-7242-4774-ae9a-97c354ecbb99 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:17:03.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2662" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1075,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:17:03.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 21 06:17:03.791: INFO: Waiting up to 5m0s for pod "pod-b68924a9-b8c2-4879-b69a-7796e91c4820" in namespace "emptydir-7111" to be "Succeeded or Failed" Aug 21 06:17:03.796: INFO: Pod "pod-b68924a9-b8c2-4879-b69a-7796e91c4820": Phase="Pending", Reason="", readiness=false. Elapsed: 4.758157ms Aug 21 06:17:05.803: INFO: Pod "pod-b68924a9-b8c2-4879-b69a-7796e91c4820": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012064901s Aug 21 06:17:07.811: INFO: Pod "pod-b68924a9-b8c2-4879-b69a-7796e91c4820": Phase="Running", Reason="", readiness=true. Elapsed: 4.019417631s Aug 21 06:17:09.819: INFO: Pod "pod-b68924a9-b8c2-4879-b69a-7796e91c4820": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.027465352s STEP: Saw pod success Aug 21 06:17:09.819: INFO: Pod "pod-b68924a9-b8c2-4879-b69a-7796e91c4820" satisfied condition "Succeeded or Failed" Aug 21 06:17:09.825: INFO: Trying to get logs from node kali-worker pod pod-b68924a9-b8c2-4879-b69a-7796e91c4820 container test-container: STEP: delete the pod Aug 21 06:17:09.861: INFO: Waiting for pod pod-b68924a9-b8c2-4879-b69a-7796e91c4820 to disappear Aug 21 06:17:09.865: INFO: Pod pod-b68924a9-b8c2-4879-b69a-7796e91c4820 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:17:09.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7111" for this suite. • [SLOW TEST:6.497 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1103,"failed":0} S ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:17:09.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-baf10769-c601-48db-8af1-8c10853f48e1 STEP: Creating secret with name s-test-opt-upd-a2e38e75-83ca-40a5-8e60-77ae59094b95 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-baf10769-c601-48db-8af1-8c10853f48e1 STEP: Updating secret s-test-opt-upd-a2e38e75-83ca-40a5-8e60-77ae59094b95 STEP: Creating secret with name s-test-opt-create-216cab99-7b29-403f-997d-cdee46be18a2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:17:20.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-605" for this suite. 
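The "optional updates" behaviour above hinges on projected secret sources marked optional: a source that is missing (or later deleted) simply projects no keys instead of blocking the pod, and the kubelet periodically refreshes the mounted files when the backing secrets change, which is what the spec waits to observe. An illustrative pod spec follows; the secret names loosely echo the ones in the log but are otherwise invented.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-secrets
spec:
  containers:
  - name: reader
    image: busybox:1.31
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del          # may be deleted later; optional keeps the volume usable
          optional: true
      - secret:
          name: s-test-opt-upd          # updates to this secret eventually appear under /etc/projected
          optional: true
EOF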
• [SLOW TEST:10.379 seconds] [sig-storage] Projected secret /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1104,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:17:20.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 06:17:27.539: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 06:17:29.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587447, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587447, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587447, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587447, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 06:17:31.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587447, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587447, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587447, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587447, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 06:17:34.594: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:17:34.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6617" for this suite. STEP: Destroying namespace "webhook-6617-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.614 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":72,"skipped":1129,"failed":0} [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:17:34.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap 
configmap-2717/configmap-test-b95eb67d-c05f-4750-8e3b-a5f25bdeb3a6 STEP: Creating a pod to test consume configMaps Aug 21 06:17:34.983: INFO: Waiting up to 5m0s for pod "pod-configmaps-c03fca44-8b04-4f10-9082-93a945166c3d" in namespace "configmap-2717" to be "Succeeded or Failed" Aug 21 06:17:35.011: INFO: Pod "pod-configmaps-c03fca44-8b04-4f10-9082-93a945166c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.733441ms Aug 21 06:17:37.019: INFO: Pod "pod-configmaps-c03fca44-8b04-4f10-9082-93a945166c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035395683s Aug 21 06:17:39.026: INFO: Pod "pod-configmaps-c03fca44-8b04-4f10-9082-93a945166c3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042725342s STEP: Saw pod success Aug 21 06:17:39.026: INFO: Pod "pod-configmaps-c03fca44-8b04-4f10-9082-93a945166c3d" satisfied condition "Succeeded or Failed" Aug 21 06:17:39.032: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-c03fca44-8b04-4f10-9082-93a945166c3d container env-test: STEP: delete the pod Aug 21 06:17:39.076: INFO: Waiting for pod pod-configmaps-c03fca44-8b04-4f10-9082-93a945166c3d to disappear Aug 21 06:17:39.125: INFO: Pod pod-configmaps-c03fca44-8b04-4f10-9082-93a945166c3d no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:17:39.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2717" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1129,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:17:39.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:17:54.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6636" for this suite. STEP: Destroying namespace "nsdeletetest-3781" for this suite. 
Aug 21 06:17:54.521: INFO: Namespace nsdeletetest-3781 was already deleted STEP: Destroying namespace "nsdeletetest-7074" for this suite. • [SLOW TEST:15.371 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":74,"skipped":1135,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:17:54.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Aug 21 06:17:58.663: INFO: Pod pod-hostip-b7c9fa20-3c16-453c-a1b7-9bfaf2133f82 has hostIP: 172.18.0.13 [AfterEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:17:58.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5797" for this suite. 
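The host-IP check above simply reads status.hostIP from the scheduled pod (the log shows hostIP: 172.18.0.13). Outside the suite, the same field can be surfaced to a container through the downward API; a rough, illustrative sketch (pod name and image are made up):

apiVersion: v1
kind: Pod
metadata:
  name: hostip-demo                    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: show-host-ip
    image: busybox
    command: ["sh", "-c", "echo host IP is $HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP     # node IP of the node the pod landed on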
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1141,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:17:58.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:17:58.868: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 21 06:17:58.943: INFO: Number of nodes with available pods: 0 Aug 21 06:17:58.943: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Aug 21 06:17:59.028: INFO: Number of nodes with available pods: 0 Aug 21 06:17:59.029: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:00.035: INFO: Number of nodes with available pods: 0 Aug 21 06:18:00.035: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:01.037: INFO: Number of nodes with available pods: 0 Aug 21 06:18:01.037: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:02.037: INFO: Number of nodes with available pods: 0 Aug 21 06:18:02.038: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:03.036: INFO: Number of nodes with available pods: 1 Aug 21 06:18:03.036: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 21 06:18:03.126: INFO: Number of nodes with available pods: 1 Aug 21 06:18:03.126: INFO: Number of running nodes: 0, number of available pods: 1 Aug 21 06:18:04.138: INFO: Number of nodes with available pods: 0 Aug 21 06:18:04.138: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 21 06:18:04.216: INFO: Number of nodes with available pods: 0 Aug 21 06:18:04.217: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:05.224: INFO: Number of nodes with available pods: 0 Aug 21 06:18:05.224: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:06.225: INFO: Number of nodes with available pods: 0 Aug 21 06:18:06.225: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:07.225: INFO: Number of nodes with available pods: 0 Aug 21 06:18:07.225: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:08.224: INFO: Number of nodes with available pods: 0 Aug 21 06:18:08.224: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:09.224: INFO: 
Number of nodes with available pods: 0 Aug 21 06:18:09.224: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:10.225: INFO: Number of nodes with available pods: 0 Aug 21 06:18:10.225: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:11.225: INFO: Number of nodes with available pods: 0 Aug 21 06:18:11.225: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:12.224: INFO: Number of nodes with available pods: 0 Aug 21 06:18:12.224: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:13.224: INFO: Number of nodes with available pods: 0 Aug 21 06:18:13.224: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:14.240: INFO: Number of nodes with available pods: 0 Aug 21 06:18:14.240: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:15.224: INFO: Number of nodes with available pods: 0 Aug 21 06:18:15.224: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:16.223: INFO: Number of nodes with available pods: 0 Aug 21 06:18:16.223: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:17.224: INFO: Number of nodes with available pods: 0 Aug 21 06:18:17.224: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:18.246: INFO: Number of nodes with available pods: 0 Aug 21 06:18:18.246: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:19.251: INFO: Number of nodes with available pods: 0 Aug 21 06:18:19.252: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:20.225: INFO: Number of nodes with available pods: 0 Aug 21 06:18:20.226: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:21.241: INFO: Number of nodes with available pods: 0 Aug 21 06:18:21.242: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:22.225: INFO: Number of nodes with available pods: 0 Aug 21 06:18:22.225: INFO: Node kali-worker2 is running more than one daemon pod Aug 21 06:18:23.224: INFO: Number of nodes with available pods: 1 Aug 21 06:18:23.224: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-517, will wait for the garbage collector to delete the pods Aug 21 06:18:23.296: INFO: Deleting DaemonSet.extensions daemon-set took: 8.381678ms Aug 21 06:18:25.997: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.700779945s Aug 21 06:18:39.102: INFO: Number of nodes with available pods: 0 Aug 21 06:18:39.102: INFO: Number of running nodes: 0, number of available pods: 0 Aug 21 06:18:39.107: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-517/daemonsets","resourceVersion":"2017789"},"items":null} Aug 21 06:18:39.112: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-517/pods","resourceVersion":"2017789"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:18:39.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"daemonsets-517" for this suite. • [SLOW TEST:40.537 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":76,"skipped":1153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:18:39.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:18:39.287: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:18:39.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5356" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":77,"skipped":1192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:18:39.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Aug 21 06:18:40.039: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:20:21.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1933" for this suite. 
• [SLOW TEST:101.557 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":78,"skipped":1227,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:20:21.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Aug 21 06:20:21.557: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:22:03.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2033" for this suite. 
• [SLOW TEST:102.076 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":79,"skipped":1228,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:22:03.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 21 06:22:03.651: INFO: Waiting up to 5m0s for pod "pod-0610a770-1a22-41ba-bb75-ee6a52821307" in namespace "emptydir-5511" to be "Succeeded or Failed" Aug 21 06:22:03.668: INFO: Pod "pod-0610a770-1a22-41ba-bb75-ee6a52821307": Phase="Pending", Reason="", readiness=false. Elapsed: 16.726761ms Aug 21 06:22:05.733: INFO: Pod "pod-0610a770-1a22-41ba-bb75-ee6a52821307": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080915016s Aug 21 06:22:07.740: INFO: Pod "pod-0610a770-1a22-41ba-bb75-ee6a52821307": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088145592s STEP: Saw pod success Aug 21 06:22:07.740: INFO: Pod "pod-0610a770-1a22-41ba-bb75-ee6a52821307" satisfied condition "Succeeded or Failed" Aug 21 06:22:07.745: INFO: Trying to get logs from node kali-worker pod pod-0610a770-1a22-41ba-bb75-ee6a52821307 container test-container: STEP: delete the pod Aug 21 06:22:07.795: INFO: Waiting for pod pod-0610a770-1a22-41ba-bb75-ee6a52821307 to disappear Aug 21 06:22:07.803: INFO: Pod pod-0610a770-1a22-41ba-bb75-ee6a52821307 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:22:07.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5511" for this suite. 
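The emptydir permission cases all follow the same shape: an emptyDir volume (backed by tmpfs when medium: Memory is set) is mounted into a short-lived pod, the container writes a file with the requested mode, and the test checks the resulting permissions and content from the pod logs. A rough illustration of such a pod; the name, image, and command are illustrative, not the agnhost fixture the suite actually runs:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # tmpfs-backed emptyDir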
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1245,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:22:07.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Aug 21 06:22:07.895: INFO: Waiting up to 5m0s for pod "client-containers-51296e2f-824f-4d2d-b391-3e2774d97aac" in namespace "containers-8306" to be "Succeeded or Failed" Aug 21 06:22:07.916: INFO: Pod "client-containers-51296e2f-824f-4d2d-b391-3e2774d97aac": Phase="Pending", Reason="", readiness=false. Elapsed: 20.305862ms Aug 21 06:22:09.923: INFO: Pod "client-containers-51296e2f-824f-4d2d-b391-3e2774d97aac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027493619s Aug 21 06:22:11.930: INFO: Pod "client-containers-51296e2f-824f-4d2d-b391-3e2774d97aac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034024367s STEP: Saw pod success Aug 21 06:22:11.930: INFO: Pod "client-containers-51296e2f-824f-4d2d-b391-3e2774d97aac" satisfied condition "Succeeded or Failed" Aug 21 06:22:11.935: INFO: Trying to get logs from node kali-worker pod client-containers-51296e2f-824f-4d2d-b391-3e2774d97aac container test-container: STEP: delete the pod Aug 21 06:22:11.967: INFO: Waiting for pod client-containers-51296e2f-824f-4d2d-b391-3e2774d97aac to disappear Aug 21 06:22:11.978: INFO: Pod client-containers-51296e2f-824f-4d2d-b391-3e2774d97aac no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:22:11.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8306" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1247,"failed":0} SSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:22:11.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Aug 21 06:22:12.120: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. Aug 21 06:22:21.026: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Aug 21 06:22:23.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587741, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587741, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587741, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587740, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 06:22:25.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587741, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587741, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587741, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587740, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 
06:22:28.076: INFO: Waited 631.787304ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:22:28.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9137" for this suite. • [SLOW TEST:16.629 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":82,"skipped":1251,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:22:28.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 21 06:22:29.211: INFO: Waiting up to 5m0s for pod "pod-b686db18-d744-4118-8c63-e51fac6677db" in namespace "emptydir-6843" to be "Succeeded or Failed" Aug 21 06:22:29.404: INFO: Pod "pod-b686db18-d744-4118-8c63-e51fac6677db": Phase="Pending", Reason="", readiness=false. Elapsed: 192.965402ms Aug 21 06:22:31.412: INFO: Pod "pod-b686db18-d744-4118-8c63-e51fac6677db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200240436s Aug 21 06:22:33.419: INFO: Pod "pod-b686db18-d744-4118-8c63-e51fac6677db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.207781585s STEP: Saw pod success Aug 21 06:22:33.420: INFO: Pod "pod-b686db18-d744-4118-8c63-e51fac6677db" satisfied condition "Succeeded or Failed" Aug 21 06:22:33.424: INFO: Trying to get logs from node kali-worker2 pod pod-b686db18-d744-4118-8c63-e51fac6677db container test-container: STEP: delete the pod Aug 21 06:22:33.513: INFO: Waiting for pod pod-b686db18-d744-4118-8c63-e51fac6677db to disappear Aug 21 06:22:33.523: INFO: Pod pod-b686db18-d744-4118-8c63-e51fac6677db no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:22:33.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6843" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1251,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:22:33.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:22:44.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1093" for this suite. • [SLOW TEST:11.226 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":275,"completed":84,"skipped":1261,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:22:44.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 21 06:22:44.871: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61b30a2a-dc36-4a73-8727-f2e841056feb" in namespace "projected-4731" to be "Succeeded or Failed" Aug 21 06:22:44.886: INFO: Pod "downwardapi-volume-61b30a2a-dc36-4a73-8727-f2e841056feb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.077763ms Aug 21 06:22:46.956: INFO: Pod "downwardapi-volume-61b30a2a-dc36-4a73-8727-f2e841056feb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084078847s Aug 21 06:22:48.963: INFO: Pod "downwardapi-volume-61b30a2a-dc36-4a73-8727-f2e841056feb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091230948s STEP: Saw pod success Aug 21 06:22:48.963: INFO: Pod "downwardapi-volume-61b30a2a-dc36-4a73-8727-f2e841056feb" satisfied condition "Succeeded or Failed" Aug 21 06:22:48.967: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-61b30a2a-dc36-4a73-8727-f2e841056feb container client-container: STEP: delete the pod Aug 21 06:22:49.051: INFO: Waiting for pod downwardapi-volume-61b30a2a-dc36-4a73-8727-f2e841056feb to disappear Aug 21 06:22:49.056: INFO: Pod downwardapi-volume-61b30a2a-dc36-4a73-8727-f2e841056feb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:22:49.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4731" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1279,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:22:49.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:22:53.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3274" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1294,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:22:53.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Aug 21 06:22:53.333: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Aug 21 06:22:53.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7034' Aug 21 06:22:57.890: INFO: stderr: "" Aug 21 06:22:57.890: INFO: stdout: "service/agnhost-slave created\n" Aug 21 06:22:57.891: INFO: apiVersion: v1 kind: 
Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Aug 21 06:22:57.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7034' Aug 21 06:22:59.371: INFO: stderr: "" Aug 21 06:22:59.371: INFO: stdout: "service/agnhost-master created\n" Aug 21 06:22:59.372: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Aug 21 06:22:59.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7034' Aug 21 06:23:00.852: INFO: stderr: "" Aug 21 06:23:00.852: INFO: stdout: "service/frontend created\n" Aug 21 06:23:00.854: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Aug 21 06:23:00.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7034' Aug 21 06:23:02.337: INFO: stderr: "" Aug 21 06:23:02.337: INFO: stdout: "deployment.apps/frontend created\n" Aug 21 06:23:02.339: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Aug 21 06:23:02.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7034' Aug 21 06:23:04.000: INFO: stderr: "" Aug 21 06:23:04.000: INFO: stdout: "deployment.apps/agnhost-master created\n" Aug 21 06:23:04.001: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Aug 21 06:23:04.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7034' Aug 21 06:23:06.408: INFO: stderr: "" Aug 21 06:23:06.409: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Aug 21 06:23:06.409: INFO: Waiting for all frontend pods to be Running. Aug 21 06:23:11.462: INFO: Waiting for frontend to serve content. Aug 21 06:23:11.476: INFO: Trying to add a new entry to the guestbook. 
Aug 21 06:23:11.537: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Aug 21 06:23:11.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7034' Aug 21 06:23:12.676: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 21 06:23:12.677: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Aug 21 06:23:12.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7034' Aug 21 06:23:13.795: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 21 06:23:13.795: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Aug 21 06:23:13.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7034' Aug 21 06:23:14.945: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 21 06:23:14.946: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 21 06:23:14.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7034' Aug 21 06:23:16.022: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 21 06:23:16.023: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 21 06:23:16.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7034' Aug 21 06:23:17.169: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 21 06:23:17.170: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Aug 21 06:23:17.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7034' Aug 21 06:23:18.408: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 21 06:23:18.408: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:23:18.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7034" for this suite. 
• [SLOW TEST:25.225 seconds] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":87,"skipped":1309,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:23:18.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 06:23:26.807: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 06:23:28.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587806, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587806, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587806, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587806, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 06:23:30.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587806, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587806, loc:(*time.Location)(0x62a11f0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587806, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587806, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 06:23:33.867: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:23:33.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:23:35.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-313" for this suite. STEP: Destroying namespace "webhook-313-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.733 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":88,"skipped":1315,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:23:35.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 06:23:40.065: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 06:23:42.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587820, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587820, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587820, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587819, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 06:23:45.131: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:23:45.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4025" for this suite. STEP: Destroying namespace "webhook-4025-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.536 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":89,"skipped":1334,"failed":0} [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:23:45.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:23:46.128: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ef4bde4b-a5b3-4392-aebf-af27177a7c21" in namespace "security-context-test-7752" to be "Succeeded or Failed" Aug 21 06:23:46.178: INFO: Pod "busybox-readonly-false-ef4bde4b-a5b3-4392-aebf-af27177a7c21": Phase="Pending", Reason="", readiness=false. Elapsed: 49.825731ms Aug 21 06:23:48.186: INFO: Pod "busybox-readonly-false-ef4bde4b-a5b3-4392-aebf-af27177a7c21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057675928s Aug 21 06:23:50.194: INFO: Pod "busybox-readonly-false-ef4bde4b-a5b3-4392-aebf-af27177a7c21": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.065656985s Aug 21 06:23:50.194: INFO: Pod "busybox-readonly-false-ef4bde4b-a5b3-4392-aebf-af27177a7c21" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:23:50.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7752" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1334,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:23:50.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-bee5cc40-6a5b-4340-a2ff-bcc2fb5406b8 STEP: Creating a pod to test consume secrets Aug 21 06:23:50.299: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7fe4062c-5615-4104-bdc7-1bec4a2aee9b" in namespace "projected-1905" to be "Succeeded or Failed" Aug 21 06:23:50.328: INFO: Pod "pod-projected-secrets-7fe4062c-5615-4104-bdc7-1bec4a2aee9b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.160731ms Aug 21 06:23:52.335: INFO: Pod "pod-projected-secrets-7fe4062c-5615-4104-bdc7-1bec4a2aee9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03583427s Aug 21 06:23:54.344: INFO: Pod "pod-projected-secrets-7fe4062c-5615-4104-bdc7-1bec4a2aee9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044050292s STEP: Saw pod success Aug 21 06:23:54.344: INFO: Pod "pod-projected-secrets-7fe4062c-5615-4104-bdc7-1bec4a2aee9b" satisfied condition "Succeeded or Failed" Aug 21 06:23:54.349: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-7fe4062c-5615-4104-bdc7-1bec4a2aee9b container projected-secret-volume-test: STEP: delete the pod Aug 21 06:23:54.388: INFO: Waiting for pod pod-projected-secrets-7fe4062c-5615-4104-bdc7-1bec4a2aee9b to disappear Aug 21 06:23:54.392: INFO: Pod pod-projected-secrets-7fe4062c-5615-4104-bdc7-1bec4a2aee9b no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:23:54.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1905" for this suite. 
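
The projected-secret spec above mounts a secret through a projected volume with an explicit per-item mode, then checks the resulting file from inside the pod. A minimal sketch of a pod in that shape follows; the secret key, mount path, image, and the 0400 item mode are placeholders (only the secret-name prefix and container name appear in the log).

// Illustrative sketch only: key, mount path, image, and mode are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // the per-item mode is the point of "Item Mode set"

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test", // container name seen in the log
				Image:   "busybox",                      // placeholder image
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test-map", // prefix seen in the log; the run appends a UID
								},
								// Map a single key to a path with an explicit mode.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
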
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1340,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:23:54.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 06:24:09.372: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 06:24:11.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587849, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587849, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587849, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733587849, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 06:24:14.706: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:24:26.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4261" for this suite. STEP: Destroying namespace "webhook-4261-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:32.692 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":92,"skipped":1345,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:24:27.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 21 06:24:27.251: INFO: Waiting up to 5m0s for pod "pod-c26e51fe-69fe-443b-b2be-b01f9390888d" in namespace "emptydir-1557" to be "Succeeded or Failed" Aug 21 06:24:27.274: INFO: Pod "pod-c26e51fe-69fe-443b-b2be-b01f9390888d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.868979ms Aug 21 06:24:29.281: INFO: Pod "pod-c26e51fe-69fe-443b-b2be-b01f9390888d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029786091s Aug 21 06:24:31.288: INFO: Pod "pod-c26e51fe-69fe-443b-b2be-b01f9390888d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036191962s STEP: Saw pod success Aug 21 06:24:31.288: INFO: Pod "pod-c26e51fe-69fe-443b-b2be-b01f9390888d" satisfied condition "Succeeded or Failed" Aug 21 06:24:31.293: INFO: Trying to get logs from node kali-worker pod pod-c26e51fe-69fe-443b-b2be-b01f9390888d container test-container: STEP: delete the pod Aug 21 06:24:31.402: INFO: Waiting for pod pod-c26e51fe-69fe-443b-b2be-b01f9390888d to disappear Aug 21 06:24:31.423: INFO: Pod pod-c26e51fe-69fe-443b-b2be-b01f9390888d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:24:31.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1557" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:24:31.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:24:48.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4040" for this suite. • [SLOW TEST:17.183 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":94,"skipped":1386,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:24:48.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0821 06:25:29.019028 10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 21 06:25:29.021: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:25:29.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1678" for this suite. 
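
The garbage-collector spec above deletes a replication controller with delete options that request orphaning, then waits 30 seconds to confirm the pods are left behind. A small sketch of the options involved, assuming client-go v0.18-style typed clients; the RC name is a placeholder.

// Illustrative sketch only: the delete options that make the garbage collector
// orphan an RC's pods instead of cascading the delete.
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	orphan := metav1.DeletePropagationOrphan

	// With a typed client (client-go v0.18+), these options would be passed as
	//   clientset.CoreV1().ReplicationControllers(ns).Delete(ctx, "example-rc", opts)
	// where "example-rc" is a placeholder name, not taken from the log.
	opts := metav1.DeleteOptions{PropagationPolicy: &orphan}

	body, _ := json.Marshal(opts)
	fmt.Println(string(body)) // {"propagationPolicy":"Orphan"}
}
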
• [SLOW TEST:40.372 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":95,"skipped":1442,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:25:29.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 21 06:25:29.123: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ecfac7b2-f8cb-46c6-86a1-f411e5381ce7" in namespace "downward-api-7232" to be "Succeeded or Failed" Aug 21 06:25:29.181: INFO: Pod "downwardapi-volume-ecfac7b2-f8cb-46c6-86a1-f411e5381ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 57.540772ms Aug 21 06:25:31.235: INFO: Pod "downwardapi-volume-ecfac7b2-f8cb-46c6-86a1-f411e5381ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111012676s Aug 21 06:25:33.242: INFO: Pod "downwardapi-volume-ecfac7b2-f8cb-46c6-86a1-f411e5381ce7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118724125s STEP: Saw pod success Aug 21 06:25:33.243: INFO: Pod "downwardapi-volume-ecfac7b2-f8cb-46c6-86a1-f411e5381ce7" satisfied condition "Succeeded or Failed" Aug 21 06:25:33.249: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-ecfac7b2-f8cb-46c6-86a1-f411e5381ce7 container client-container: STEP: delete the pod Aug 21 06:25:33.311: INFO: Waiting for pod downwardapi-volume-ecfac7b2-f8cb-46c6-86a1-f411e5381ce7 to disappear Aug 21 06:25:33.340: INFO: Pod downwardapi-volume-ecfac7b2-f8cb-46c6-86a1-f411e5381ce7 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:25:33.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7232" for this suite. 
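
The downward API volume spec above exposes the container's own CPU request as a file in the volume and verifies its contents from inside the pod. A minimal sketch of a pod in that shape follows; the image, paths, and resource values are placeholders, and with a divisor of 1m a request of 250m is reported in the file as 250.

// Illustrative sketch only: image, paths, and values are placeholders; the shape
// shown is a downward API volume file backed by a resourceFieldRef.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container", // container name seen in the log
				Image:   "busybox",          // placeholder
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
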
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1445,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:25:33.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 21 06:25:33.486: INFO: Waiting up to 5m0s for pod "downward-api-33fb47d3-b07f-4440-ba65-18f2740464d2" in namespace "downward-api-1627" to be "Succeeded or Failed" Aug 21 06:25:33.519: INFO: Pod "downward-api-33fb47d3-b07f-4440-ba65-18f2740464d2": Phase="Pending", Reason="", readiness=false. Elapsed: 32.822956ms Aug 21 06:25:35.562: INFO: Pod "downward-api-33fb47d3-b07f-4440-ba65-18f2740464d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07557498s Aug 21 06:25:37.829: INFO: Pod "downward-api-33fb47d3-b07f-4440-ba65-18f2740464d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342563556s Aug 21 06:25:39.838: INFO: Pod "downward-api-33fb47d3-b07f-4440-ba65-18f2740464d2": Phase="Running", Reason="", readiness=true. Elapsed: 6.35102087s Aug 21 06:25:41.848: INFO: Pod "downward-api-33fb47d3-b07f-4440-ba65-18f2740464d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.361632659s STEP: Saw pod success Aug 21 06:25:41.849: INFO: Pod "downward-api-33fb47d3-b07f-4440-ba65-18f2740464d2" satisfied condition "Succeeded or Failed" Aug 21 06:25:41.854: INFO: Trying to get logs from node kali-worker pod downward-api-33fb47d3-b07f-4440-ba65-18f2740464d2 container dapi-container: STEP: delete the pod Aug 21 06:25:41.886: INFO: Waiting for pod downward-api-33fb47d3-b07f-4440-ba65-18f2740464d2 to disappear Aug 21 06:25:41.904: INFO: Pod downward-api-33fb47d3-b07f-4440-ba65-18f2740464d2 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 06:25:41.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1627" for this suite. 
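
The downward API spec above does the same through environment variables instead of a volume: limits and requests are injected via resourceFieldRef and checked in the container's environment. A sketch of the container portion, with placeholder image and values (only the container name dapi-container appears in the log).

// Illustrative sketch only: image and resource values are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container", // container name seen in the log
		Image:   "busybox",        // placeholder
		Command: []string{"sh", "-c", "env | grep -E 'CPU|MEMORY'"},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("32Mi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("500m"),
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
		},
		Env: []corev1.EnvVar{
			{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			}},
			{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
			}},
		},
	}

	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
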
• [SLOW TEST:8.563 seconds] [sig-node] Downward API /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1453,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 06:25:41.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 06:25:42.018: INFO: (0) /api/v1/nodes/kali-worker2/proxy/logs/:
alternatives.log
containers/

[... the same directory listing is repeated for each of the 20 proxied requests; the per-request log prefixes, the remainder of this spec's output, and the opening lines of the next spec are missing from this capture ...]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 21 06:25:46.851: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3372 pod-service-account-f2da0ac1-9a77-4761-8de9-c19cde64cbdc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 21 06:25:48.181: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3372 pod-service-account-f2da0ac1-9a77-4761-8de9-c19cde64cbdc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 21 06:25:49.508: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3372 pod-service-account-f2da0ac1-9a77-4761-8de9-c19cde64cbdc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:25:50.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3372" for this suite.

• [SLOW TEST:8.815 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":99,"skipped":1490,"failed":0}
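
The ServiceAccounts spec above verifies that the auto-created token, CA certificate, and namespace are mounted at the conventional path and readable, which is what the three kubectl exec ... cat commands check. For reference, a minimal sketch (standard library only) of doing the same read from inside a pod; the paths are the standard mount locations.

// Illustrative sketch only: when run inside a pod with the default service
// account token mounted, this reads the same three files the spec cats.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(filepath.Join(base, name))
		if err != nil {
			fmt.Fprintf(os.Stderr, "read %s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}
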
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:25:50.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8982
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8982
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8982
Aug 21 06:25:51.090: INFO: Found 0 stateful pods, waiting for 1
Aug 21 06:26:01.100: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
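
What follows relies on the StatefulSet using the default OrderedReady pod management together with a readiness probe on index.html: the pod template (not shown in the log) evidently serves /index.html from Apache httpd, so moving the file away makes the pod unready, and the controller then holds ordered scaling while any pod is unready, which is what the "doesn't scale past" checks below verify. A minimal sketch of a StatefulSet in that shape, assuming k8s.io/api v0.18.x (where Probe embeds Handler); the name, namespace, labels, and service name are taken from the log above, while the image and probe settings are placeholders.

// Illustrative sketch only: image and probe settings are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"foo": "bar", "baz": "blah"} // selector used by the watcher in the log

	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "statefulset-8982"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:            &replicas,
			ServiceName:         "test", // headless service created by the spec
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			PodManagementPolicy: appsv1.OrderedReadyPodManagement, // the default, spelled out for clarity
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "httpd:2.4", // placeholder; the e2e suite pins its own httpd image
						ReadinessProbe: &corev1.Probe{
							Handler: corev1.Handler{
								HTTPGet: &corev1.HTTPGetAction{Path: "/index.html", Port: intstr.FromInt(80)},
							},
							PeriodSeconds:    1,
							SuccessThreshold: 1,
							FailureThreshold: 1,
						},
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
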
Aug 21 06:26:01.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8982 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 06:26:02.503: INFO: stderr: "I0821 06:26:02.374695    1885 log.go:172] (0x28a07e0) (0x28a0930) Create stream\nI0821 06:26:02.378318    1885 log.go:172] (0x28a07e0) (0x28a0930) Stream added, broadcasting: 1\nI0821 06:26:02.395519    1885 log.go:172] (0x28a07e0) Reply frame received for 1\nI0821 06:26:02.395928    1885 log.go:172] (0x28a07e0) (0x29bc0e0) Create stream\nI0821 06:26:02.395992    1885 log.go:172] (0x28a07e0) (0x29bc0e0) Stream added, broadcasting: 3\nI0821 06:26:02.397212    1885 log.go:172] (0x28a07e0) Reply frame received for 3\nI0821 06:26:02.397495    1885 log.go:172] (0x28a07e0) (0x2c191f0) Create stream\nI0821 06:26:02.397572    1885 log.go:172] (0x28a07e0) (0x2c191f0) Stream added, broadcasting: 5\nI0821 06:26:02.398644    1885 log.go:172] (0x28a07e0) Reply frame received for 5\nI0821 06:26:02.449078    1885 log.go:172] (0x28a07e0) Data frame received for 5\nI0821 06:26:02.449381    1885 log.go:172] (0x2c191f0) (5) Data frame handling\nI0821 06:26:02.449841    1885 log.go:172] (0x2c191f0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 06:26:02.477614    1885 log.go:172] (0x28a07e0) Data frame received for 3\nI0821 06:26:02.477888    1885 log.go:172] (0x29bc0e0) (3) Data frame handling\nI0821 06:26:02.478080    1885 log.go:172] (0x28a07e0) Data frame received for 5\nI0821 06:26:02.478286    1885 log.go:172] (0x2c191f0) (5) Data frame handling\nI0821 06:26:02.478575    1885 log.go:172] (0x29bc0e0) (3) Data frame sent\nI0821 06:26:02.478816    1885 log.go:172] (0x28a07e0) Data frame received for 3\nI0821 06:26:02.478938    1885 log.go:172] (0x29bc0e0) (3) Data frame handling\nI0821 06:26:02.480825    1885 log.go:172] (0x28a07e0) Data frame received for 1\nI0821 06:26:02.480985    1885 log.go:172] (0x28a0930) (1) Data frame handling\nI0821 06:26:02.481117    1885 log.go:172] (0x28a0930) (1) Data frame sent\nI0821 06:26:02.482318    1885 log.go:172] (0x28a07e0) (0x28a0930) Stream removed, broadcasting: 1\nI0821 06:26:02.485616    1885 log.go:172] (0x28a07e0) Go away received\nI0821 06:26:02.488475    1885 log.go:172] (0x28a07e0) (0x28a0930) Stream removed, broadcasting: 1\nI0821 06:26:02.489027    1885 log.go:172] (0x28a07e0) (0x29bc0e0) Stream removed, broadcasting: 3\nI0821 06:26:02.489349    1885 log.go:172] (0x28a07e0) (0x2c191f0) Stream removed, broadcasting: 5\n"
Aug 21 06:26:02.504: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 06:26:02.504: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 06:26:02.510: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 21 06:26:12.519: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 06:26:12.520: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 06:26:12.561: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999979215s
Aug 21 06:26:13.570: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.976350211s
Aug 21 06:26:14.578: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.967810876s
Aug 21 06:26:15.586: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.959730274s
Aug 21 06:26:16.594: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.951485779s
Aug 21 06:26:17.601: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.943510088s
Aug 21 06:26:18.610: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.936414859s
Aug 21 06:26:19.618: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.927935447s
Aug 21 06:26:20.625: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.919556021s
Aug 21 06:26:21.631: INFO: Verifying statefulset ss doesn't scale past 1 for another 912.3652ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8982
Aug 21 06:26:22.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8982 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 06:26:24.025: INFO: stderr: "I0821 06:26:23.887640    1910 log.go:172] (0x308aaf0) (0x308ab60) Create stream\nI0821 06:26:23.892439    1910 log.go:172] (0x308aaf0) (0x308ab60) Stream added, broadcasting: 1\nI0821 06:26:23.911908    1910 log.go:172] (0x308aaf0) Reply frame received for 1\nI0821 06:26:23.912426    1910 log.go:172] (0x308aaf0) (0x2b26070) Create stream\nI0821 06:26:23.912492    1910 log.go:172] (0x308aaf0) (0x2b26070) Stream added, broadcasting: 3\nI0821 06:26:23.913754    1910 log.go:172] (0x308aaf0) Reply frame received for 3\nI0821 06:26:23.913957    1910 log.go:172] (0x308aaf0) (0x2a981c0) Create stream\nI0821 06:26:23.914020    1910 log.go:172] (0x308aaf0) (0x2a981c0) Stream added, broadcasting: 5\nI0821 06:26:23.915070    1910 log.go:172] (0x308aaf0) Reply frame received for 5\nI0821 06:26:24.004493    1910 log.go:172] (0x308aaf0) Data frame received for 5\nI0821 06:26:24.004850    1910 log.go:172] (0x308aaf0) Data frame received for 1\nI0821 06:26:24.005115    1910 log.go:172] (0x308aaf0) Data frame received for 3\nI0821 06:26:24.005244    1910 log.go:172] (0x2b26070) (3) Data frame handling\nI0821 06:26:24.005333    1910 log.go:172] (0x308ab60) (1) Data frame handling\nI0821 06:26:24.005533    1910 log.go:172] (0x2a981c0) (5) Data frame handling\nI0821 06:26:24.007125    1910 log.go:172] (0x2a981c0) (5) Data frame sent\nI0821 06:26:24.007360    1910 log.go:172] (0x308ab60) (1) Data frame sent\nI0821 06:26:24.007741    1910 log.go:172] (0x2b26070) (3) Data frame sent\nI0821 06:26:24.007843    1910 log.go:172] (0x308aaf0) Data frame received for 3\nI0821 06:26:24.007944    1910 log.go:172] (0x2b26070) (3) Data frame handling\nI0821 06:26:24.008027    1910 log.go:172] (0x308aaf0) Data frame received for 5\nI0821 06:26:24.008098    1910 log.go:172] (0x2a981c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 06:26:24.009336    1910 log.go:172] (0x308aaf0) (0x308ab60) Stream removed, broadcasting: 1\nI0821 06:26:24.010145    1910 log.go:172] (0x308aaf0) Go away received\nI0821 06:26:24.013546    1910 log.go:172] (0x308aaf0) (0x308ab60) Stream removed, broadcasting: 1\nI0821 06:26:24.013843    1910 log.go:172] (0x308aaf0) (0x2b26070) Stream removed, broadcasting: 3\nI0821 06:26:24.014040    1910 log.go:172] (0x308aaf0) (0x2a981c0) Stream removed, broadcasting: 5\n"
Aug 21 06:26:24.026: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 06:26:24.026: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 06:26:24.033: INFO: Found 1 stateful pods, waiting for 3
Aug 21 06:26:34.044: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 06:26:34.044: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 06:26:34.044: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 21 06:26:34.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8982 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 06:26:35.458: INFO: stderr: "I0821 06:26:35.350781    1933 log.go:172] (0x2b74380) (0x2b743f0) Create stream\nI0821 06:26:35.353402    1933 log.go:172] (0x2b74380) (0x2b743f0) Stream added, broadcasting: 1\nI0821 06:26:35.369816    1933 log.go:172] (0x2b74380) Reply frame received for 1\nI0821 06:26:35.370286    1933 log.go:172] (0x2b74380) (0x28947e0) Create stream\nI0821 06:26:35.370355    1933 log.go:172] (0x2b74380) (0x28947e0) Stream added, broadcasting: 3\nI0821 06:26:35.371651    1933 log.go:172] (0x2b74380) Reply frame received for 3\nI0821 06:26:35.371938    1933 log.go:172] (0x2b74380) (0x2c2ea80) Create stream\nI0821 06:26:35.372011    1933 log.go:172] (0x2b74380) (0x2c2ea80) Stream added, broadcasting: 5\nI0821 06:26:35.373267    1933 log.go:172] (0x2b74380) Reply frame received for 5\nI0821 06:26:35.439891    1933 log.go:172] (0x2b74380) Data frame received for 3\nI0821 06:26:35.440346    1933 log.go:172] (0x2b74380) Data frame received for 5\nI0821 06:26:35.440658    1933 log.go:172] (0x2c2ea80) (5) Data frame handling\nI0821 06:26:35.441069    1933 log.go:172] (0x28947e0) (3) Data frame handling\nI0821 06:26:35.441499    1933 log.go:172] (0x2b74380) Data frame received for 1\nI0821 06:26:35.441584    1933 log.go:172] (0x2b743f0) (1) Data frame handling\nI0821 06:26:35.442444    1933 log.go:172] (0x2b743f0) (1) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 06:26:35.442974    1933 log.go:172] (0x2c2ea80) (5) Data frame sent\nI0821 06:26:35.443068    1933 log.go:172] (0x2b74380) Data frame received for 5\nI0821 06:26:35.443130    1933 log.go:172] (0x2c2ea80) (5) Data frame handling\nI0821 06:26:35.443289    1933 log.go:172] (0x28947e0) (3) Data frame sent\nI0821 06:26:35.443363    1933 log.go:172] (0x2b74380) Data frame received for 3\nI0821 06:26:35.443419    1933 log.go:172] (0x28947e0) (3) Data frame handling\nI0821 06:26:35.445333    1933 log.go:172] (0x2b74380) (0x2b743f0) Stream removed, broadcasting: 1\nI0821 06:26:35.446883    1933 log.go:172] (0x2b74380) Go away received\nI0821 06:26:35.449704    1933 log.go:172] (0x2b74380) (0x2b743f0) Stream removed, broadcasting: 1\nI0821 06:26:35.449986    1933 log.go:172] (0x2b74380) (0x28947e0) Stream removed, broadcasting: 3\nI0821 06:26:35.450218    1933 log.go:172] (0x2b74380) (0x2c2ea80) Stream removed, broadcasting: 5\n"
Aug 21 06:26:35.460: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 06:26:35.460: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 06:26:35.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8982 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 06:26:36.896: INFO: stderr: "I0821 06:26:36.746825    1956 log.go:172] (0x2cc8070) (0x2cc80e0) Create stream\nI0821 06:26:36.748590    1956 log.go:172] (0x2cc8070) (0x2cc80e0) Stream added, broadcasting: 1\nI0821 06:26:36.762136    1956 log.go:172] (0x2cc8070) Reply frame received for 1\nI0821 06:26:36.762839    1956 log.go:172] (0x2cc8070) (0x2aa0230) Create stream\nI0821 06:26:36.762937    1956 log.go:172] (0x2cc8070) (0x2aa0230) Stream added, broadcasting: 3\nI0821 06:26:36.765064    1956 log.go:172] (0x2cc8070) Reply frame received for 3\nI0821 06:26:36.765735    1956 log.go:172] (0x2cc8070) (0x2c3c4d0) Create stream\nI0821 06:26:36.765895    1956 log.go:172] (0x2cc8070) (0x2c3c4d0) Stream added, broadcasting: 5\nI0821 06:26:36.767668    1956 log.go:172] (0x2cc8070) Reply frame received for 5\nI0821 06:26:36.834633    1956 log.go:172] (0x2cc8070) Data frame received for 5\nI0821 06:26:36.834902    1956 log.go:172] (0x2c3c4d0) (5) Data frame handling\nI0821 06:26:36.835352    1956 log.go:172] (0x2c3c4d0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 06:26:36.873507    1956 log.go:172] (0x2cc8070) Data frame received for 3\nI0821 06:26:36.873809    1956 log.go:172] (0x2aa0230) (3) Data frame handling\nI0821 06:26:36.874158    1956 log.go:172] (0x2aa0230) (3) Data frame sent\nI0821 06:26:36.874419    1956 log.go:172] (0x2cc8070) Data frame received for 3\nI0821 06:26:36.874640    1956 log.go:172] (0x2aa0230) (3) Data frame handling\nI0821 06:26:36.874873    1956 log.go:172] (0x2cc8070) Data frame received for 5\nI0821 06:26:36.875193    1956 log.go:172] (0x2c3c4d0) (5) Data frame handling\nI0821 06:26:36.875384    1956 log.go:172] (0x2cc8070) Data frame received for 1\nI0821 06:26:36.875542    1956 log.go:172] (0x2cc80e0) (1) Data frame handling\nI0821 06:26:36.875694    1956 log.go:172] (0x2cc80e0) (1) Data frame sent\nI0821 06:26:36.877499    1956 log.go:172] (0x2cc8070) (0x2cc80e0) Stream removed, broadcasting: 1\nI0821 06:26:36.879372    1956 log.go:172] (0x2cc8070) Go away received\nI0821 06:26:36.883379    1956 log.go:172] (0x2cc8070) (0x2cc80e0) Stream removed, broadcasting: 1\nI0821 06:26:36.883636    1956 log.go:172] (0x2cc8070) (0x2aa0230) Stream removed, broadcasting: 3\nI0821 06:26:36.883838    1956 log.go:172] (0x2cc8070) (0x2c3c4d0) Stream removed, broadcasting: 5\n"
Aug 21 06:26:36.896: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 06:26:36.896: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 06:26:36.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8982 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 06:26:38.343: INFO: stderr: "I0821 06:26:38.175426    1980 log.go:172] (0x2d60000) (0x2d60070) Create stream\nI0821 06:26:38.178852    1980 log.go:172] (0x2d60000) (0x2d60070) Stream added, broadcasting: 1\nI0821 06:26:38.193097    1980 log.go:172] (0x2d60000) Reply frame received for 1\nI0821 06:26:38.193730    1980 log.go:172] (0x2d60000) (0x2c28460) Create stream\nI0821 06:26:38.193815    1980 log.go:172] (0x2d60000) (0x2c28460) Stream added, broadcasting: 3\nI0821 06:26:38.195061    1980 log.go:172] (0x2d60000) Reply frame received for 3\nI0821 06:26:38.195294    1980 log.go:172] (0x2d60000) (0x2a92380) Create stream\nI0821 06:26:38.195361    1980 log.go:172] (0x2d60000) (0x2a92380) Stream added, broadcasting: 5\nI0821 06:26:38.196332    1980 log.go:172] (0x2d60000) Reply frame received for 5\nI0821 06:26:38.285495    1980 log.go:172] (0x2d60000) Data frame received for 5\nI0821 06:26:38.285821    1980 log.go:172] (0x2a92380) (5) Data frame handling\nI0821 06:26:38.286382    1980 log.go:172] (0x2a92380) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 06:26:38.323704    1980 log.go:172] (0x2d60000) Data frame received for 3\nI0821 06:26:38.323977    1980 log.go:172] (0x2c28460) (3) Data frame handling\nI0821 06:26:38.324164    1980 log.go:172] (0x2d60000) Data frame received for 5\nI0821 06:26:38.324468    1980 log.go:172] (0x2a92380) (5) Data frame handling\nI0821 06:26:38.324708    1980 log.go:172] (0x2c28460) (3) Data frame sent\nI0821 06:26:38.324884    1980 log.go:172] (0x2d60000) Data frame received for 3\nI0821 06:26:38.324954    1980 log.go:172] (0x2c28460) (3) Data frame handling\nI0821 06:26:38.325882    1980 log.go:172] (0x2d60000) Data frame received for 1\nI0821 06:26:38.326071    1980 log.go:172] (0x2d60070) (1) Data frame handling\nI0821 06:26:38.326324    1980 log.go:172] (0x2d60070) (1) Data frame sent\nI0821 06:26:38.328135    1980 log.go:172] (0x2d60000) (0x2d60070) Stream removed, broadcasting: 1\nI0821 06:26:38.329085    1980 log.go:172] (0x2d60000) Go away received\nI0821 06:26:38.332253    1980 log.go:172] (0x2d60000) (0x2d60070) Stream removed, broadcasting: 1\nI0821 06:26:38.332457    1980 log.go:172] (0x2d60000) (0x2c28460) Stream removed, broadcasting: 3\nI0821 06:26:38.332630    1980 log.go:172] (0x2d60000) (0x2a92380) Stream removed, broadcasting: 5\n"
Aug 21 06:26:38.344: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 06:26:38.344: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 06:26:38.344: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 06:26:38.350: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 21 06:26:48.362: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 06:26:48.363: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 06:26:48.363: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 06:26:48.397: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999986182s
Aug 21 06:26:49.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976648733s
Aug 21 06:26:50.417: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.965580202s
Aug 21 06:26:51.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.956507593s
Aug 21 06:26:52.452: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.933399666s
Aug 21 06:26:53.461: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.920865998s
Aug 21 06:26:54.473: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.911912227s
Aug 21 06:26:55.485: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.899717894s
Aug 21 06:26:56.495: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.888173577s
Aug 21 06:26:57.504: INFO: Verifying statefulset ss doesn't scale past 3 for another 878.096092ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8982
Aug 21 06:26:58.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8982 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 06:26:59.967: INFO: stderr: "I0821 06:26:59.854311    2003 log.go:172] (0x2f8e000) (0x2f8e070) Create stream\nI0821 06:26:59.856916    2003 log.go:172] (0x2f8e000) (0x2f8e070) Stream added, broadcasting: 1\nI0821 06:26:59.869274    2003 log.go:172] (0x2f8e000) Reply frame received for 1\nI0821 06:26:59.870044    2003 log.go:172] (0x2f8e000) (0x2c20070) Create stream\nI0821 06:26:59.870176    2003 log.go:172] (0x2f8e000) (0x2c20070) Stream added, broadcasting: 3\nI0821 06:26:59.872058    2003 log.go:172] (0x2f8e000) Reply frame received for 3\nI0821 06:26:59.872549    2003 log.go:172] (0x2f8e000) (0x2f8e230) Create stream\nI0821 06:26:59.872673    2003 log.go:172] (0x2f8e000) (0x2f8e230) Stream added, broadcasting: 5\nI0821 06:26:59.874613    2003 log.go:172] (0x2f8e000) Reply frame received for 5\nI0821 06:26:59.945769    2003 log.go:172] (0x2f8e000) Data frame received for 5\nI0821 06:26:59.946011    2003 log.go:172] (0x2f8e000) Data frame received for 1\nI0821 06:26:59.946260    2003 log.go:172] (0x2f8e000) Data frame received for 3\nI0821 06:26:59.946436    2003 log.go:172] (0x2c20070) (3) Data frame handling\nI0821 06:26:59.946651    2003 log.go:172] (0x2f8e070) (1) Data frame handling\nI0821 06:26:59.947330    2003 log.go:172] (0x2f8e070) (1) Data frame sent\nI0821 06:26:59.947673    2003 log.go:172] (0x2c20070) (3) Data frame sent\nI0821 06:26:59.947947    2003 log.go:172] (0x2f8e000) Data frame received for 3\nI0821 06:26:59.948053    2003 log.go:172] (0x2c20070) (3) Data frame handling\nI0821 06:26:59.948157    2003 log.go:172] (0x2f8e230) (5) Data frame handling\nI0821 06:26:59.948328    2003 log.go:172] (0x2f8e230) (5) Data frame sent\nI0821 06:26:59.948471    2003 log.go:172] (0x2f8e000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 06:26:59.950394    2003 log.go:172] (0x2f8e000) (0x2f8e070) Stream removed, broadcasting: 1\nI0821 06:26:59.951533    2003 log.go:172] (0x2f8e230) (5) Data frame handling\nI0821 06:26:59.951813    2003 log.go:172] (0x2f8e000) Go away received\nI0821 06:26:59.956408    2003 log.go:172] (0x2f8e000) (0x2f8e070) Stream removed, broadcasting: 1\nI0821 06:26:59.956717    2003 log.go:172] (0x2f8e000) (0x2c20070) Stream removed, broadcasting: 3\nI0821 06:26:59.957081    2003 log.go:172] (0x2f8e000) (0x2f8e230) Stream removed, broadcasting: 5\n"
Aug 21 06:26:59.968: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 06:26:59.968: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 06:26:59.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8982 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 06:27:01.375: INFO: stderr: "I0821 06:27:01.243270    2026 log.go:172] (0x2d62620) (0x2e2c000) Create stream\nI0821 06:27:01.250728    2026 log.go:172] (0x2d62620) (0x2e2c000) Stream added, broadcasting: 1\nI0821 06:27:01.262736    2026 log.go:172] (0x2d62620) Reply frame received for 1\nI0821 06:27:01.263497    2026 log.go:172] (0x2d62620) (0x2ab0230) Create stream\nI0821 06:27:01.263622    2026 log.go:172] (0x2d62620) (0x2ab0230) Stream added, broadcasting: 3\nI0821 06:27:01.265326    2026 log.go:172] (0x2d62620) Reply frame received for 3\nI0821 06:27:01.265584    2026 log.go:172] (0x2d62620) (0x2c58a10) Create stream\nI0821 06:27:01.265653    2026 log.go:172] (0x2d62620) (0x2c58a10) Stream added, broadcasting: 5\nI0821 06:27:01.266689    2026 log.go:172] (0x2d62620) Reply frame received for 5\nI0821 06:27:01.356357    2026 log.go:172] (0x2d62620) Data frame received for 3\nI0821 06:27:01.356859    2026 log.go:172] (0x2d62620) Data frame received for 5\nI0821 06:27:01.357026    2026 log.go:172] (0x2c58a10) (5) Data frame handling\nI0821 06:27:01.357467    2026 log.go:172] (0x2d62620) Data frame received for 1\nI0821 06:27:01.357609    2026 log.go:172] (0x2e2c000) (1) Data frame handling\nI0821 06:27:01.357719    2026 log.go:172] (0x2ab0230) (3) Data frame handling\nI0821 06:27:01.358133    2026 log.go:172] (0x2e2c000) (1) Data frame sent\nI0821 06:27:01.358233    2026 log.go:172] (0x2ab0230) (3) Data frame sent\nI0821 06:27:01.358484    2026 log.go:172] (0x2c58a10) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 06:27:01.358875    2026 log.go:172] (0x2d62620) Data frame received for 3\nI0821 06:27:01.358978    2026 log.go:172] (0x2ab0230) (3) Data frame handling\nI0821 06:27:01.359418    2026 log.go:172] (0x2d62620) Data frame received for 5\nI0821 06:27:01.359509    2026 log.go:172] (0x2c58a10) (5) Data frame handling\nI0821 06:27:01.360453    2026 log.go:172] (0x2d62620) (0x2e2c000) Stream removed, broadcasting: 1\nI0821 06:27:01.362499    2026 log.go:172] (0x2d62620) Go away received\nI0821 06:27:01.365351    2026 log.go:172] (0x2d62620) (0x2e2c000) Stream removed, broadcasting: 1\nI0821 06:27:01.365544    2026 log.go:172] (0x2d62620) (0x2ab0230) Stream removed, broadcasting: 3\nI0821 06:27:01.365711    2026 log.go:172] (0x2d62620) (0x2c58a10) Stream removed, broadcasting: 5\n"
Aug 21 06:27:01.376: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 06:27:01.376: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 06:27:01.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8982 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 06:27:02.830: INFO: rc: 1
Aug 21 06:27:02.831: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8982 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
I0821 06:27:02.733766    2050 log.go:172] (0x2f8fc70) (0x2f8fce0) Create stream
I0821 06:27:02.735949    2050 log.go:172] (0x2f8fc70) (0x2f8fce0) Stream added, broadcasting: 1
I0821 06:27:02.752031    2050 log.go:172] (0x2f8fc70) Reply frame received for 1
I0821 06:27:02.752550    2050 log.go:172] (0x2f8fc70) (0x2dba0e0) Create stream
I0821 06:27:02.752636    2050 log.go:172] (0x2f8fc70) (0x2dba0e0) Stream added, broadcasting: 3
I0821 06:27:02.754270    2050 log.go:172] (0x2f8fc70) Reply frame received for 3
I0821 06:27:02.754594    2050 log.go:172] (0x2f8fc70) (0x2dba620) Create stream
I0821 06:27:02.754684    2050 log.go:172] (0x2f8fc70) (0x2dba620) Stream added, broadcasting: 5
I0821 06:27:02.755986    2050 log.go:172] (0x2f8fc70) Reply frame received for 5
I0821 06:27:02.805945    2050 log.go:172] (0x2f8fc70) Data frame received for 1
I0821 06:27:02.806444    2050 log.go:172] (0x2f8fce0) (1) Data frame handling
I0821 06:27:02.807463    2050 log.go:172] (0x2f8fce0) (1) Data frame sent
I0821 06:27:02.809703    2050 log.go:172] (0x2f8fc70) (0x2f8fce0) Stream removed, broadcasting: 1
I0821 06:27:02.810401    2050 log.go:172] (0x2f8fc70) (0x2dba0e0) Stream removed, broadcasting: 3
I0821 06:27:02.810977    2050 log.go:172] (0x2f8fc70) (0x2dba620) Stream removed, broadcasting: 5
I0821 06:27:02.813746    2050 log.go:172] (0x2f8fc70) Go away received
I0821 06:27:02.816937    2050 log.go:172] (0x2f8fc70) (0x2f8fce0) Stream removed, broadcasting: 1
I0821 06:27:02.817147    2050 log.go:172] (0x2f8fc70) (0x2dba0e0) Stream removed, broadcasting: 3
I0821 06:27:02.817229    2050 log.go:172] (0x2f8fc70) (0x2dba620) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "988f211312745906181264f6c8488355faf1ff10e6e65804f6278f9f60e610ca": task 1aeba3f2c35277bd694ff1c7a19e2831594fdfa0513721501b53842e19e36b98 not found: not found

error:
exit status 1
Aug 21 06:27:12.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8982 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 06:27:13.951: INFO: rc: 1
Aug 21 06:27:13.951: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8982 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 06:27:23 to 06:31:40: INFO: the same RunHostCmd retry against ss-2 repeats every 10s (attempts at 06:27:23, 06:27:35, 06:27:46, ... 06:31:39); every attempt returns rc: 1 with empty stdout, stderr 'Error from server (NotFound): pods "ss-2" not found' and exit status 1.
Aug 21 06:32:01.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8982 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 06:32:02.971: INFO: rc: 1
Aug 21 06:32:02.972: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Aug 21 06:32:02.972: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 21 06:32:03.003: INFO: Deleting all statefulset in ns statefulset-8982
Aug 21 06:32:03.007: INFO: Scaling statefulset ss to 0
Aug 21 06:32:03.022: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 06:32:03.026: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:32:03.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8982" for this suite.

• [SLOW TEST:372.099 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":100,"skipped":1502,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:32:03.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 21 06:32:03.173: INFO: Waiting up to 5m0s for pod "downward-api-54b20793-12c0-4df4-b6c1-5f3ae90b2d38" in namespace "downward-api-9765" to be "Succeeded or Failed"
Aug 21 06:32:03.200: INFO: Pod "downward-api-54b20793-12c0-4df4-b6c1-5f3ae90b2d38": Phase="Pending", Reason="", readiness=false. Elapsed: 26.169506ms
Aug 21 06:32:05.216: INFO: Pod "downward-api-54b20793-12c0-4df4-b6c1-5f3ae90b2d38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042023969s
Aug 21 06:32:07.225: INFO: Pod "downward-api-54b20793-12c0-4df4-b6c1-5f3ae90b2d38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051120529s
STEP: Saw pod success
Aug 21 06:32:07.225: INFO: Pod "downward-api-54b20793-12c0-4df4-b6c1-5f3ae90b2d38" satisfied condition "Succeeded or Failed"
Aug 21 06:32:07.231: INFO: Trying to get logs from node kali-worker pod downward-api-54b20793-12c0-4df4-b6c1-5f3ae90b2d38 container dapi-container: 
STEP: delete the pod
Aug 21 06:32:07.300: INFO: Waiting for pod downward-api-54b20793-12c0-4df4-b6c1-5f3ae90b2d38 to disappear
Aug 21 06:32:07.306: INFO: Pod downward-api-54b20793-12c0-4df4-b6c1-5f3ae90b2d38 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:32:07.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9765" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1601,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:32:07.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 06:32:11.464: INFO: Waiting up to 5m0s for pod "client-envvars-78dc93ae-73f6-4cae-9ac6-830df129a950" in namespace "pods-665" to be "Succeeded or Failed"
Aug 21 06:32:11.509: INFO: Pod "client-envvars-78dc93ae-73f6-4cae-9ac6-830df129a950": Phase="Pending", Reason="", readiness=false. Elapsed: 44.90423ms
Aug 21 06:32:13.517: INFO: Pod "client-envvars-78dc93ae-73f6-4cae-9ac6-830df129a950": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053408619s
Aug 21 06:32:15.725: INFO: Pod "client-envvars-78dc93ae-73f6-4cae-9ac6-830df129a950": Phase="Running", Reason="", readiness=true. Elapsed: 4.260638525s
Aug 21 06:32:17.732: INFO: Pod "client-envvars-78dc93ae-73f6-4cae-9ac6-830df129a950": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.268083087s
STEP: Saw pod success
Aug 21 06:32:17.733: INFO: Pod "client-envvars-78dc93ae-73f6-4cae-9ac6-830df129a950" satisfied condition "Succeeded or Failed"
Aug 21 06:32:17.738: INFO: Trying to get logs from node kali-worker pod client-envvars-78dc93ae-73f6-4cae-9ac6-830df129a950 container env3cont: 
STEP: delete the pod
Aug 21 06:32:17.764: INFO: Waiting for pod client-envvars-78dc93ae-73f6-4cae-9ac6-830df129a950 to disappear
Aug 21 06:32:17.768: INFO: Pod client-envvars-78dc93ae-73f6-4cae-9ac6-830df129a950 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:32:17.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-665" for this suite.

• [SLOW TEST:10.456 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1630,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:32:17.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Aug 21 06:32:17.927: INFO: Waiting up to 5m0s for pod "client-containers-151ef604-84b3-4f89-a64c-6047c6cbd308" in namespace "containers-1278" to be "Succeeded or Failed"
Aug 21 06:32:17.946: INFO: Pod "client-containers-151ef604-84b3-4f89-a64c-6047c6cbd308": Phase="Pending", Reason="", readiness=false. Elapsed: 18.393517ms
Aug 21 06:32:19.983: INFO: Pod "client-containers-151ef604-84b3-4f89-a64c-6047c6cbd308": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05575018s
Aug 21 06:32:21.992: INFO: Pod "client-containers-151ef604-84b3-4f89-a64c-6047c6cbd308": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064113699s
STEP: Saw pod success
Aug 21 06:32:21.992: INFO: Pod "client-containers-151ef604-84b3-4f89-a64c-6047c6cbd308" satisfied condition "Succeeded or Failed"
Aug 21 06:32:21.996: INFO: Trying to get logs from node kali-worker pod client-containers-151ef604-84b3-4f89-a64c-6047c6cbd308 container test-container: 
STEP: delete the pod
Aug 21 06:32:22.072: INFO: Waiting for pod client-containers-151ef604-84b3-4f89-a64c-6047c6cbd308 to disappear
Aug 21 06:32:22.083: INFO: Pod client-containers-151ef604-84b3-4f89-a64c-6047c6cbd308 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:32:22.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1278" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1688,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:32:22.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-fec66d4b-1886-47f3-b1c7-c9e0af5b97e9
STEP: Creating a pod to test consume configMaps
Aug 21 06:32:22.241: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b83f36ab-b038-4b94-95c2-7273ca418723" in namespace "projected-9247" to be "Succeeded or Failed"
Aug 21 06:32:22.284: INFO: Pod "pod-projected-configmaps-b83f36ab-b038-4b94-95c2-7273ca418723": Phase="Pending", Reason="", readiness=false. Elapsed: 42.75104ms
Aug 21 06:32:24.354: INFO: Pod "pod-projected-configmaps-b83f36ab-b038-4b94-95c2-7273ca418723": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112827962s
Aug 21 06:32:26.362: INFO: Pod "pod-projected-configmaps-b83f36ab-b038-4b94-95c2-7273ca418723": Phase="Running", Reason="", readiness=true. Elapsed: 4.12053089s
Aug 21 06:32:28.369: INFO: Pod "pod-projected-configmaps-b83f36ab-b038-4b94-95c2-7273ca418723": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.12741718s
STEP: Saw pod success
Aug 21 06:32:28.369: INFO: Pod "pod-projected-configmaps-b83f36ab-b038-4b94-95c2-7273ca418723" satisfied condition "Succeeded or Failed"
Aug 21 06:32:28.374: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-b83f36ab-b038-4b94-95c2-7273ca418723 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 06:32:28.425: INFO: Waiting for pod pod-projected-configmaps-b83f36ab-b038-4b94-95c2-7273ca418723 to disappear
Aug 21 06:32:28.520: INFO: Pod pod-projected-configmaps-b83f36ab-b038-4b94-95c2-7273ca418723 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:32:28.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9247" for this suite.

• [SLOW TEST:6.417 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1703,"failed":0}
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:32:28.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 21 06:32:28.595: INFO: PodSpec: initContainers in spec.initContainers
Aug 21 06:33:15.726: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fa0193ea-0881-49eb-ad5f-b9000619ec05", GenerateName:"", Namespace:"init-container-6701", SelfLink:"/api/v1/namespaces/init-container-6701/pods/pod-init-fa0193ea-0881-49eb-ad5f-b9000619ec05", UID:"ecfdba64-e388-4a42-b0a4-48f934ed8228", ResourceVersion:"2021790", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733588348, loc:(*time.Location)(0x62a11f0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"594272506"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x8fe7f40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x97a5820)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x8fe7f60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x97a5830)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-swcqj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x8fe7f80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-swcqj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-swcqj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-swcqj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xab02978), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x77c4b00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xab02a00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xab02a20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xab02a28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xab02a2c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588348, loc:(*time.Location)(0x62a11f0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588348, loc:(*time.Location)(0x62a11f0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588348, loc:(*time.Location)(0x62a11f0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588348, loc:(*time.Location)(0x62a11f0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.16", PodIP:"10.244.2.177", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.177"}}, StartTime:(*v1.Time)(0x8018020), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x8018040), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x67bbae0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://0ae65b8a9d43bf56753aa587729b13d3e20c4d8e1a92a042566561dc46c630ce", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x97a5850), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x97a5840), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xab02aaf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:33:15.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6701" for this suite.

• [SLOW TEST:47.404 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":105,"skipped":1708,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:33:15.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 21 06:33:21.169: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:33:21.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6311" for this suite.

• [SLOW TEST:5.657 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":106,"skipped":1723,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:33:21.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 06:33:21.878: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 21 06:33:26.967: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 21 06:33:26.967: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 21 06:33:28.975: INFO: Creating deployment "test-rollover-deployment"
Aug 21 06:33:29.002: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 21 06:33:31.016: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 21 06:33:31.026: INFO: Ensure that both replica sets have 1 created replica
Aug 21 06:33:31.036: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 21 06:33:31.053: INFO: Updating deployment test-rollover-deployment
Aug 21 06:33:31.053: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 21 06:33:33.073: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 21 06:33:33.086: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 21 06:33:33.098: INFO: all replica sets need to contain the pod-template-hash label
Aug 21 06:33:33.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588409, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588409, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588411, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588409, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 06:33:35.114: INFO: all replica sets need to contain the pod-template-hash label
Aug 21 06:33:35.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588409, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588409, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588414, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588409, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 06:33:37 to 06:33:43: INFO: the same "all replica sets need to contain the pod-template-hash label" check repeats every 2s with an unchanged deployment status (ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Reason:"ReplicaSetUpdated").
Aug 21 06:33:45.117: INFO: 
Aug 21 06:33:45.117: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 21 06:33:45.136: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-8613 /apis/apps/v1/namespaces/deployment-8613/deployments/test-rollover-deployment 3510d403-4640-4d19-9f44-a75ccfdf3583 2022010 2 2020-08-21 06:33:28 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-21 06:33:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-21 06:33:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 
105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9e59148  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-21 06:33:29 +0000 UTC,LastTransitionTime:2020-08-21 06:33:29 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-08-21 06:33:44 +0000 UTC,LastTransitionTime:2020-08-21 06:33:29 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 21 06:33:45.146: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-8613 /apis/apps/v1/namespaces/deployment-8613/replicasets/test-rollover-deployment-84f7f6f64b bde1de45-8ff4-44e8-9345-8a8cf04a1359 2021999 2 2020-08-21 06:33:31 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 3510d403-4640-4d19-9f44-a75ccfdf3583 0x9e59747 0x9e59748}] []  [{kube-controller-manager Update apps/v1 2020-08-21 06:33:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 53 49 48 100 52 48 51 45 52 54 52 48 45 52 100 49 57 45 57 102 52 52 45 97 55 53 99 99 102 100 102 51 53 56 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 
101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9e597d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 21 06:33:45.147: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 21 06:33:45.149: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-8613 /apis/apps/v1/namespaces/deployment-8613/replicasets/test-rollover-controller e979ece0-6a56-43d4-8353-e9a9cfab5777 2022009 2 2020-08-21 06:33:21 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 3510d403-4640-4d19-9f44-a75ccfdf3583 0x9e59537 0x9e59538}] []  [{e2e.test Update apps/v1 2020-08-21 06:33:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-21 06:33:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 53 49 48 100 52 48 51 45 52 54 52 48 45 52 100 49 57 45 57 102 52 52 45 97 55 53 99 99 102 100 102 51 53 56 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 
58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x9e595d8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 06:33:45.151: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-8613 /apis/apps/v1/namespaces/deployment-8613/replicasets/test-rollover-deployment-5686c4cfd5 2898e295-ca24-469f-a472-26e8026b5671 2021947 2 2020-08-21 06:33:29 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 3510d403-4640-4d19-9f44-a75ccfdf3583 0x9e59647 0x9e59648}] []  [{kube-controller-manager Update apps/v1 2020-08-21 06:33:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 53 49 48 100 52 48 51 45 52 54 52 48 45 52 100 49 57 45 57 102 52 52 45 97 55 53 99 99 102 100 102 51 53 56 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 
97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9e596d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 06:33:45.162: INFO: Pod "test-rollover-deployment-84f7f6f64b-stmhg" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-stmhg test-rollover-deployment-84f7f6f64b- deployment-8613 /api/v1/namespaces/deployment-8613/pods/test-rollover-deployment-84f7f6f64b-stmhg d6366420-d883-4513-aae9-3388549de4ae 2021965 0 2020-08-21 06:33:31 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b bde1de45-8ff4-44e8-9345-8a8cf04a1359 0x9e59d57 0x9e59d58}] []  [{kube-controller-manager Update v1 2020-08-21 06:33:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 100 101 49 100 101 52 53 45 56 102 102 52 45 52 52 101 56 45 57 51 52 53 45 56 97 56 99 102 48 52 97 49 51 53 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:33:34 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 
46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 55 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-brq57,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-brq57,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-brq57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:33:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:33:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.179,StartTime:2020-08-21 06:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 06:33:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://49127e71836bb2f2f7c11a3f01f9d461e445a99b86dd6f20bb2f48adbf3de1fa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.179,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:33:45.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8613" for this suite.

• [SLOW TEST:23.569 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":107,"skipped":1779,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:33:45.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 06:33:45.386: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:33:46.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6120" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":108,"skipped":1781,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:33:46.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-e6f7271b-e5b5-4394-8e1a-7e8b9bd009e9
STEP: Creating a pod to test consume configMaps
Aug 21 06:33:46.728: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ecf23ab-6210-42cf-aef5-f9d574b80bc4" in namespace "configmap-5846" to be "Succeeded or Failed"
Aug 21 06:33:46.743: INFO: Pod "pod-configmaps-6ecf23ab-6210-42cf-aef5-f9d574b80bc4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.303842ms
Aug 21 06:33:48.966: INFO: Pod "pod-configmaps-6ecf23ab-6210-42cf-aef5-f9d574b80bc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237829892s
Aug 21 06:33:50.974: INFO: Pod "pod-configmaps-6ecf23ab-6210-42cf-aef5-f9d574b80bc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.245281356s
STEP: Saw pod success
Aug 21 06:33:50.974: INFO: Pod "pod-configmaps-6ecf23ab-6210-42cf-aef5-f9d574b80bc4" satisfied condition "Succeeded or Failed"
Aug 21 06:33:51.086: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-6ecf23ab-6210-42cf-aef5-f9d574b80bc4 container configmap-volume-test: 
STEP: delete the pod
Aug 21 06:33:51.320: INFO: Waiting for pod pod-configmaps-6ecf23ab-6210-42cf-aef5-f9d574b80bc4 to disappear
Aug 21 06:33:51.328: INFO: Pod pod-configmaps-6ecf23ab-6210-42cf-aef5-f9d574b80bc4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:33:51.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5846" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1790,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:33:51.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 21 06:33:51.540: INFO: Waiting up to 5m0s for pod "pod-293a6367-099d-4c36-94ca-f5a3c14dfa6a" in namespace "emptydir-9816" to be "Succeeded or Failed"
Aug 21 06:33:51.606: INFO: Pod "pod-293a6367-099d-4c36-94ca-f5a3c14dfa6a": Phase="Pending", Reason="", readiness=false. Elapsed: 66.284738ms
Aug 21 06:33:53.613: INFO: Pod "pod-293a6367-099d-4c36-94ca-f5a3c14dfa6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073480861s
Aug 21 06:33:55.620: INFO: Pod "pod-293a6367-099d-4c36-94ca-f5a3c14dfa6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080445262s
STEP: Saw pod success
Aug 21 06:33:55.621: INFO: Pod "pod-293a6367-099d-4c36-94ca-f5a3c14dfa6a" satisfied condition "Succeeded or Failed"
Aug 21 06:33:55.625: INFO: Trying to get logs from node kali-worker2 pod pod-293a6367-099d-4c36-94ca-f5a3c14dfa6a container test-container: 
STEP: delete the pod
Aug 21 06:33:55.915: INFO: Waiting for pod pod-293a6367-099d-4c36-94ca-f5a3c14dfa6a to disappear
Aug 21 06:33:55.937: INFO: Pod pod-293a6367-099d-4c36-94ca-f5a3c14dfa6a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:33:55.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9816" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1803,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:33:55.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 06:33:56.013: INFO: Creating deployment "test-recreate-deployment"
Aug 21 06:33:56.091: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1

Aug 21 06:33:56.178: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 21 06:33:58.191: INFO: Waiting for deployment "test-recreate-deployment" to complete
Aug 21 06:33:58.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588436, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588436, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588436, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588436, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 06:34:00.204: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 21 06:34:00.215: INFO: Updating deployment test-recreate-deployment
Aug 21 06:34:00.215: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 21 06:34:00.901: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-3129 /apis/apps/v1/namespaces/deployment-3129/deployments/test-recreate-deployment 1665bb6d-50ef-4d00-bdd4-9c1676e8ed80 2022197 2 2020-08-21 06:33:56 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-21 06:34:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-21 06:34:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 
112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9cd4298  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-21 06:34:00 +0000 UTC,LastTransitionTime:2020-08-21 06:34:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-08-21 06:34:00 +0000 UTC,LastTransitionTime:2020-08-21 06:33:56 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 21 06:34:00.913: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-3129 /apis/apps/v1/namespaces/deployment-3129/replicasets/test-recreate-deployment-d5667d9c7 e4227a83-440f-4fb4-91fc-8b0cc0e0ae61 2022194 1 2020-08-21 06:34:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1665bb6d-50ef-4d00-bdd4-9c1676e8ed80 0x9cd4790 0x9cd4791}] []  [{kube-controller-manager Update apps/v1 2020-08-21 06:34:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 54 54 53 98 98 54 100 45 53 48 101 102 45 52 100 48 48 45 98 100 100 52 45 57 99 49 54 55 54 101 56 101 100 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 
58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9cd4808  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 06:34:00.913: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 21 06:34:00.915: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-3129 /apis/apps/v1/namespaces/deployment-3129/replicasets/test-recreate-deployment-74d98b5f7c f8f6b4cb-d92b-41c4-aeb1-41b8fed9ca1e 2022185 2 2020-08-21 06:33:56 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1665bb6d-50ef-4d00-bdd4-9c1676e8ed80 0x9cd4687 0x9cd4688}] []  [{kube-controller-manager Update apps/v1 2020-08-21 06:34:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 54 54 53 98 98 54 100 45 53 48 101 102 45 52 100 48 48 45 98 100 100 52 45 57 99 49 54 55 54 101 56 101 100 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 
97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9cd4728  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 06:34:00.924: INFO: Pod "test-recreate-deployment-d5667d9c7-mcj94" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-mcj94 test-recreate-deployment-d5667d9c7- deployment-3129 /api/v1/namespaces/deployment-3129/pods/test-recreate-deployment-d5667d9c7-mcj94 7ec1ff4f-da93-42eb-b0a1-17038d6531f1 2022198 0 2020-08-21 06:34:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 e4227a83-440f-4fb4-91fc-8b0cc0e0ae61 0x9cd4ce0 0x9cd4ce1}] []  [{kube-controller-manager Update v1 2020-08-21 06:34:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 52 50 50 55 97 56 51 45 52 52 48 102 45 52 102 98 52 45 57 49 102 99 45 56 98 48 99 99 48 101 48 97 101 54 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:34:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 
121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fbdmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fbdmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fbdmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,
ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:34:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:34:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:34:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:34:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 06:34:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:34:00.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3129" for this suite.

• [SLOW TEST:5.025 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":111,"skipped":1810,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:34:00.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 06:34:01.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config version'
Aug 21 06:34:02.587: INFO: stderr: ""
Aug 21 06:34:02.587: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.8\", GitCommit:\"9f2892aab98fe339f3bd70e3c470144299398ace\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T16:12:48Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/arm\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.8\", GitCommit:\"9f2892aab98fe339f3bd70e3c470144299398ace\", GitTreeState:\"clean\", BuildDate:\"2020-08-14T21:13:38Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:34:02.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9249" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":112,"skipped":1813,"failed":0}
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:34:02.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 06:34:06.902: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:34:06.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2754" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1815,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:34:06.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 21 06:34:08.157: INFO: Pod name wrapped-volume-race-4b893422-964a-4a76-a763-48badcc1a5f0: Found 0 pods out of 5
Aug 21 06:34:13.177: INFO: Pod name wrapped-volume-race-4b893422-964a-4a76-a763-48badcc1a5f0: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4b893422-964a-4a76-a763-48badcc1a5f0 in namespace emptydir-wrapper-4452, will wait for the garbage collector to delete the pods
Aug 21 06:34:27.313: INFO: Deleting ReplicationController wrapped-volume-race-4b893422-964a-4a76-a763-48badcc1a5f0 took: 30.091573ms
Aug 21 06:34:27.414: INFO: Terminating ReplicationController wrapped-volume-race-4b893422-964a-4a76-a763-48badcc1a5f0 pods took: 100.792072ms
STEP: Creating RC which spawns configmap-volume pods
Aug 21 06:34:39.363: INFO: Pod name wrapped-volume-race-ad8f5c07-7781-4a29-b2c3-83cbb866e5b9: Found 0 pods out of 5
Aug 21 06:34:44.385: INFO: Pod name wrapped-volume-race-ad8f5c07-7781-4a29-b2c3-83cbb866e5b9: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ad8f5c07-7781-4a29-b2c3-83cbb866e5b9 in namespace emptydir-wrapper-4452, will wait for the garbage collector to delete the pods
Aug 21 06:34:58.565: INFO: Deleting ReplicationController wrapped-volume-race-ad8f5c07-7781-4a29-b2c3-83cbb866e5b9 took: 32.570956ms
Aug 21 06:34:58.868: INFO: Terminating ReplicationController wrapped-volume-race-ad8f5c07-7781-4a29-b2c3-83cbb866e5b9 pods took: 302.551499ms
STEP: Creating RC which spawns configmap-volume pods
Aug 21 06:35:09.258: INFO: Pod name wrapped-volume-race-80c58155-839d-438f-b63c-5a6a576d0f3b: Found 1 pods out of 5
Aug 21 06:35:14.280: INFO: Pod name wrapped-volume-race-80c58155-839d-438f-b63c-5a6a576d0f3b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-80c58155-839d-438f-b63c-5a6a576d0f3b in namespace emptydir-wrapper-4452, will wait for the garbage collector to delete the pods
Aug 21 06:35:30.407: INFO: Deleting ReplicationController wrapped-volume-race-80c58155-839d-438f-b63c-5a6a576d0f3b took: 10.171578ms
Aug 21 06:35:30.807: INFO: Terminating ReplicationController wrapped-volume-race-80c58155-839d-438f-b63c-5a6a576d0f3b pods took: 400.827666ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:35:39.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4452" for this suite.

• [SLOW TEST:92.956 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":114,"skipped":1822,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:35:39.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:35:44.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7032" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":1840,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:35:44.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-37ca6231-db82-4dba-865e-c8557cfdbb26
STEP: Creating a pod to test consume configMaps
Aug 21 06:35:44.173: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ff50ce4b-ebf9-4d5d-8049-6732df664794" in namespace "projected-2501" to be "Succeeded or Failed"
Aug 21 06:35:44.204: INFO: Pod "pod-projected-configmaps-ff50ce4b-ebf9-4d5d-8049-6732df664794": Phase="Pending", Reason="", readiness=false. Elapsed: 31.10532ms
Aug 21 06:35:46.226: INFO: Pod "pod-projected-configmaps-ff50ce4b-ebf9-4d5d-8049-6732df664794": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052830964s
Aug 21 06:35:48.232: INFO: Pod "pod-projected-configmaps-ff50ce4b-ebf9-4d5d-8049-6732df664794": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058988432s
STEP: Saw pod success
Aug 21 06:35:48.232: INFO: Pod "pod-projected-configmaps-ff50ce4b-ebf9-4d5d-8049-6732df664794" satisfied condition "Succeeded or Failed"
Aug 21 06:35:48.238: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-ff50ce4b-ebf9-4d5d-8049-6732df664794 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 06:35:48.620: INFO: Waiting for pod pod-projected-configmaps-ff50ce4b-ebf9-4d5d-8049-6732df664794 to disappear
Aug 21 06:35:48.652: INFO: Pod pod-projected-configmaps-ff50ce4b-ebf9-4d5d-8049-6732df664794 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:35:48.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2501" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1846,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:35:48.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-03cf1b72-3916-4ce1-b513-3b37e192ffbe
STEP: Creating a pod to test consume secrets
Aug 21 06:35:48.897: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c0298f52-fc6f-4bc6-9397-bff3e8e9546a" in namespace "projected-213" to be "Succeeded or Failed"
Aug 21 06:35:48.939: INFO: Pod "pod-projected-secrets-c0298f52-fc6f-4bc6-9397-bff3e8e9546a": Phase="Pending", Reason="", readiness=false. Elapsed: 41.798532ms
Aug 21 06:35:50.946: INFO: Pod "pod-projected-secrets-c0298f52-fc6f-4bc6-9397-bff3e8e9546a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048917052s
Aug 21 06:35:52.953: INFO: Pod "pod-projected-secrets-c0298f52-fc6f-4bc6-9397-bff3e8e9546a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05624063s
STEP: Saw pod success
Aug 21 06:35:52.953: INFO: Pod "pod-projected-secrets-c0298f52-fc6f-4bc6-9397-bff3e8e9546a" satisfied condition "Succeeded or Failed"
Aug 21 06:35:52.958: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-c0298f52-fc6f-4bc6-9397-bff3e8e9546a container projected-secret-volume-test: 
STEP: delete the pod
Aug 21 06:35:53.010: INFO: Waiting for pod pod-projected-secrets-c0298f52-fc6f-4bc6-9397-bff3e8e9546a to disappear
Aug 21 06:35:53.020: INFO: Pod pod-projected-secrets-c0298f52-fc6f-4bc6-9397-bff3e8e9546a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:35:53.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-213" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":1861,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:35:53.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-d61fbd75-c919-4472-8bf6-2456ccd42e94
STEP: Creating a pod to test consume secrets
Aug 21 06:35:53.125: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5484c1d4-feb5-4b56-8d41-0e94996dfe39" in namespace "projected-8766" to be "Succeeded or Failed"
Aug 21 06:35:53.144: INFO: Pod "pod-projected-secrets-5484c1d4-feb5-4b56-8d41-0e94996dfe39": Phase="Pending", Reason="", readiness=false. Elapsed: 19.036053ms
Aug 21 06:35:55.158: INFO: Pod "pod-projected-secrets-5484c1d4-feb5-4b56-8d41-0e94996dfe39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033192173s
Aug 21 06:35:57.165: INFO: Pod "pod-projected-secrets-5484c1d4-feb5-4b56-8d41-0e94996dfe39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040445634s
STEP: Saw pod success
Aug 21 06:35:57.166: INFO: Pod "pod-projected-secrets-5484c1d4-feb5-4b56-8d41-0e94996dfe39" satisfied condition "Succeeded or Failed"
Aug 21 06:35:57.170: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-5484c1d4-feb5-4b56-8d41-0e94996dfe39 container secret-volume-test: 
STEP: delete the pod
Aug 21 06:35:57.205: INFO: Waiting for pod pod-projected-secrets-5484c1d4-feb5-4b56-8d41-0e94996dfe39 to disappear
Aug 21 06:35:57.213: INFO: Pod pod-projected-secrets-5484c1d4-feb5-4b56-8d41-0e94996dfe39 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:35:57.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8766" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":1865,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:35:57.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-91b22680-75c8-43e4-95a5-5179108ea7ec
STEP: Creating a pod to test consume secrets
Aug 21 06:35:57.360: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c635be68-8bcd-45bc-81da-151e118ec349" in namespace "projected-3761" to be "Succeeded or Failed"
Aug 21 06:35:57.428: INFO: Pod "pod-projected-secrets-c635be68-8bcd-45bc-81da-151e118ec349": Phase="Pending", Reason="", readiness=false. Elapsed: 67.709075ms
Aug 21 06:35:59.434: INFO: Pod "pod-projected-secrets-c635be68-8bcd-45bc-81da-151e118ec349": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073685034s
Aug 21 06:36:01.442: INFO: Pod "pod-projected-secrets-c635be68-8bcd-45bc-81da-151e118ec349": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081031763s
STEP: Saw pod success
Aug 21 06:36:01.442: INFO: Pod "pod-projected-secrets-c635be68-8bcd-45bc-81da-151e118ec349" satisfied condition "Succeeded or Failed"
Aug 21 06:36:01.447: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-c635be68-8bcd-45bc-81da-151e118ec349 container projected-secret-volume-test: 
STEP: delete the pod
Aug 21 06:36:01.530: INFO: Waiting for pod pod-projected-secrets-c635be68-8bcd-45bc-81da-151e118ec349 to disappear
Aug 21 06:36:01.614: INFO: Pod pod-projected-secrets-c635be68-8bcd-45bc-81da-151e118ec349 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:36:01.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3761" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":1878,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:36:01.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug 21 06:36:01.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5098'
Aug 21 06:36:05.710: INFO: stderr: ""
Aug 21 06:36:05.710: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 06:36:05.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5098'
Aug 21 06:36:06.865: INFO: stderr: ""
Aug 21 06:36:06.865: INFO: stdout: "update-demo-nautilus-j9tkz update-demo-nautilus-t4t9n "
Aug 21 06:36:06.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j9tkz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 21 06:36:08.033: INFO: stderr: ""
Aug 21 06:36:08.033: INFO: stdout: ""
Aug 21 06:36:08.033: INFO: update-demo-nautilus-j9tkz is created but not running
Aug 21 06:36:13.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5098'
Aug 21 06:36:14.163: INFO: stderr: ""
Aug 21 06:36:14.164: INFO: stdout: "update-demo-nautilus-j9tkz update-demo-nautilus-t4t9n "
Aug 21 06:36:14.164: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j9tkz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 21 06:36:15.282: INFO: stderr: ""
Aug 21 06:36:15.282: INFO: stdout: "true"
Aug 21 06:36:15.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j9tkz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 21 06:36:16.393: INFO: stderr: ""
Aug 21 06:36:16.393: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 06:36:16.393: INFO: validating pod update-demo-nautilus-j9tkz
Aug 21 06:36:16.407: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 06:36:16.407: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 06:36:16.407: INFO: update-demo-nautilus-j9tkz is verified up and running
Aug 21 06:36:16.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t4t9n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 21 06:36:17.491: INFO: stderr: ""
Aug 21 06:36:17.491: INFO: stdout: "true"
Aug 21 06:36:17.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t4t9n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 21 06:36:18.614: INFO: stderr: ""
Aug 21 06:36:18.614: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 06:36:18.614: INFO: validating pod update-demo-nautilus-t4t9n
Aug 21 06:36:18.620: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 06:36:18.620: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 06:36:18.620: INFO: update-demo-nautilus-t4t9n is verified up and running
STEP: using delete to clean up resources
Aug 21 06:36:18.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5098'
Aug 21 06:36:19.715: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 06:36:19.716: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 21 06:36:19.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5098'
Aug 21 06:36:20.854: INFO: stderr: "No resources found in kubectl-5098 namespace.\n"
Aug 21 06:36:20.855: INFO: stdout: ""
Aug 21 06:36:20.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5098 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 06:36:22.011: INFO: stderr: ""
Aug 21 06:36:22.011: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:36:22.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5098" for this suite.

• [SLOW TEST:20.394 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":120,"skipped":1887,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:36:22.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-edec0b0c-f098-4e81-a78c-4d1e1b62593b
STEP: Creating a pod to test consume configMaps
Aug 21 06:36:22.121: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7a664e8b-2460-4c56-9295-ec98a6524fae" in namespace "projected-9071" to be "Succeeded or Failed"
Aug 21 06:36:22.155: INFO: Pod "pod-projected-configmaps-7a664e8b-2460-4c56-9295-ec98a6524fae": Phase="Pending", Reason="", readiness=false. Elapsed: 33.525428ms
Aug 21 06:36:24.162: INFO: Pod "pod-projected-configmaps-7a664e8b-2460-4c56-9295-ec98a6524fae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040810162s
Aug 21 06:36:26.170: INFO: Pod "pod-projected-configmaps-7a664e8b-2460-4c56-9295-ec98a6524fae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048876329s
STEP: Saw pod success
Aug 21 06:36:26.170: INFO: Pod "pod-projected-configmaps-7a664e8b-2460-4c56-9295-ec98a6524fae" satisfied condition "Succeeded or Failed"
Aug 21 06:36:26.176: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-7a664e8b-2460-4c56-9295-ec98a6524fae container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 06:36:26.217: INFO: Waiting for pod pod-projected-configmaps-7a664e8b-2460-4c56-9295-ec98a6524fae to disappear
Aug 21 06:36:26.248: INFO: Pod pod-projected-configmaps-7a664e8b-2460-4c56-9295-ec98a6524fae no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:36:26.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9071" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":1966,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:36:26.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-vql68 in namespace proxy-3606
I0821 06:36:26.399918      10 runners.go:190] Created replication controller with name: proxy-service-vql68, namespace: proxy-3606, replica count: 1
I0821 06:36:27.451759      10 runners.go:190] proxy-service-vql68 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 06:36:28.452887      10 runners.go:190] proxy-service-vql68 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 06:36:29.453870      10 runners.go:190] proxy-service-vql68 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0821 06:36:30.454685      10 runners.go:190] proxy-service-vql68 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0821 06:36:31.455330      10 runners.go:190] proxy-service-vql68 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0821 06:36:32.456052      10 runners.go:190] proxy-service-vql68 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 06:36:32.462: INFO: setup took 6.155829273s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 21 06:36:32.470: INFO: (0) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 6.784901ms)
Aug 21 06:36:32.470: INFO: (0) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6/proxy/: test (200; 6.71578ms)
Aug 21 06:36:32.470: INFO: (0) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:162/proxy/: bar (200; 7.06606ms)
Aug 21 06:36:32.478: INFO: (0) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 14.242443ms)
Aug 21 06:36:32.478: INFO: (0) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 14.769731ms)
Aug 21 06:36:32.479: INFO: (0) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: ... (200; 15.361701ms)
Aug 21 06:36:32.479: INFO: (0) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 16.157515ms)
Aug 21 06:36:32.480: INFO: (0) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:160/proxy/: foo (200; 15.874091ms)
Aug 21 06:36:32.480: INFO: (0) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 16.598022ms)
Aug 21 06:36:32.480: INFO: (0) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname1/proxy/: foo (200; 16.754595ms)
Aug 21 06:36:32.482: INFO: (0) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 18.764691ms)
Aug 21 06:36:32.482: INFO: (0) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 18.336881ms)
Aug 21 06:36:32.491: INFO: (1) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 8.071724ms)
Aug 21 06:36:32.491: INFO: (1) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:160/proxy/: foo (200; 8.407929ms)
Aug 21 06:36:32.492: INFO: (1) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: ... (200; 9.926739ms)
Aug 21 06:36:32.493: INFO: (1) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 9.760421ms)
Aug 21 06:36:32.493: INFO: (1) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 10.25044ms)
Aug 21 06:36:32.493: INFO: (1) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 9.959798ms)
Aug 21 06:36:32.493: INFO: (1) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6/proxy/: test (200; 10.312297ms)
Aug 21 06:36:32.493: INFO: (1) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 10.610447ms)
Aug 21 06:36:32.493: INFO: (1) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:162/proxy/: bar (200; 10.783167ms)
Aug 21 06:36:32.494: INFO: (1) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 11.259274ms)
Aug 21 06:36:32.494: INFO: (1) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 11.086429ms)
Aug 21 06:36:32.494: INFO: (1) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 10.88807ms)
Aug 21 06:36:32.494: INFO: (1) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 10.908601ms)
Aug 21 06:36:32.499: INFO: (2) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:162/proxy/: bar (200; 4.498219ms)
Aug 21 06:36:32.499: INFO: (2) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 5.357175ms)
Aug 21 06:36:32.500: INFO: (2) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 4.991515ms)
Aug 21 06:36:32.500: INFO: (2) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:160/proxy/: foo (200; 6.397169ms)
Aug 21 06:36:32.500: INFO: (2) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 6.274918ms)
Aug 21 06:36:32.501: INFO: (2) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 7.081561ms)
Aug 21 06:36:32.502: INFO: (2) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 7.837234ms)
Aug 21 06:36:32.502: INFO: (2) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 8.134729ms)
Aug 21 06:36:32.503: INFO: (2) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 8.108616ms)
Aug 21 06:36:32.503: INFO: (2) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test (200; 10.695951ms)
Aug 21 06:36:32.506: INFO: (2) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 10.913311ms)
Aug 21 06:36:32.509: INFO: (3) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:162/proxy/: bar (200; 3.69786ms)
Aug 21 06:36:32.511: INFO: (3) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname1/proxy/: foo (200; 5.456184ms)
Aug 21 06:36:32.512: INFO: (3) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test (200; 6.577459ms)
Aug 21 06:36:32.513: INFO: (3) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 6.868828ms)
Aug 21 06:36:32.513: INFO: (3) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 6.752462ms)
Aug 21 06:36:32.513: INFO: (3) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 6.931876ms)
Aug 21 06:36:32.513: INFO: (3) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 7.16418ms)
Aug 21 06:36:32.513: INFO: (3) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 7.392718ms)
Aug 21 06:36:32.513: INFO: (3) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 7.467543ms)
Aug 21 06:36:32.513: INFO: (3) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 7.246419ms)
Aug 21 06:36:32.514: INFO: (3) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 7.516308ms)
Aug 21 06:36:32.514: INFO: (3) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 7.642454ms)
Aug 21 06:36:32.514: INFO: (3) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 7.631687ms)
Aug 21 06:36:32.518: INFO: (4) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 4.324514ms)
Aug 21 06:36:32.519: INFO: (4) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 4.945399ms)
Aug 21 06:36:32.519: INFO: (4) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test (200; 6.435892ms)
Aug 21 06:36:32.520: INFO: (4) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:162/proxy/: bar (200; 6.29094ms)
Aug 21 06:36:32.521: INFO: (4) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 6.619464ms)
Aug 21 06:36:32.521: INFO: (4) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 6.664284ms)
Aug 21 06:36:32.521: INFO: (4) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 6.643129ms)
Aug 21 06:36:32.521: INFO: (4) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 6.708772ms)
Aug 21 06:36:32.521: INFO: (4) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 6.89704ms)
Aug 21 06:36:32.521: INFO: (4) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 7.052001ms)
Aug 21 06:36:32.521: INFO: (4) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname1/proxy/: foo (200; 7.010199ms)
Aug 21 06:36:32.620: INFO: (4) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 106.051619ms)
Aug 21 06:36:32.622: INFO: (4) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 107.168439ms)
Aug 21 06:36:32.653: INFO: (5) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 31.441374ms)
Aug 21 06:36:32.697: INFO: (5) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 74.669893ms)
Aug 21 06:36:32.698: INFO: (5) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 75.801767ms)
Aug 21 06:36:32.698: INFO: (5) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 75.758186ms)
Aug 21 06:36:32.698: INFO: (5) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:160/proxy/: foo (200; 75.888172ms)
Aug 21 06:36:32.698: INFO: (5) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 75.553216ms)
Aug 21 06:36:32.698: INFO: (5) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test (200; 75.442578ms)
Aug 21 06:36:32.704: INFO: (5) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 81.912073ms)
Aug 21 06:36:32.704: INFO: (5) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 81.289488ms)
Aug 21 06:36:32.704: INFO: (5) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 82.608915ms)
Aug 21 06:36:32.705: INFO: (5) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:162/proxy/: bar (200; 82.330668ms)
Aug 21 06:36:32.705: INFO: (5) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname1/proxy/: foo (200; 81.905079ms)
Aug 21 06:36:32.705: INFO: (5) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 82.550397ms)
Aug 21 06:36:32.708: INFO: (5) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 85.236766ms)
Aug 21 06:36:32.708: INFO: (5) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 85.304178ms)
Aug 21 06:36:32.781: INFO: (6) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 73.27163ms)
Aug 21 06:36:32.782: INFO: (6) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 72.77988ms)
Aug 21 06:36:32.782: INFO: (6) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:162/proxy/: bar (200; 73.173642ms)
Aug 21 06:36:32.782: INFO: (6) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 73.314737ms)
Aug 21 06:36:32.782: INFO: (6) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 73.524583ms)
Aug 21 06:36:32.783: INFO: (6) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 73.802512ms)
Aug 21 06:36:32.783: INFO: (6) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 74.109605ms)
Aug 21 06:36:32.783: INFO: (6) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:160/proxy/: foo (200; 74.665577ms)
Aug 21 06:36:32.783: INFO: (6) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6/proxy/: test (200; 74.801966ms)
Aug 21 06:36:32.784: INFO: (6) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test<... (200; 38.046973ms)
Aug 21 06:36:32.824: INFO: (7) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 38.593442ms)
Aug 21 06:36:32.824: INFO: (7) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test (200; 38.783756ms)
Aug 21 06:36:32.825: INFO: (7) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 38.926365ms)
Aug 21 06:36:32.825: INFO: (7) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 40.297564ms)
Aug 21 06:36:32.827: INFO: (7) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 41.561337ms)
Aug 21 06:36:32.832: INFO: (7) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 46.016149ms)
Aug 21 06:36:32.832: INFO: (7) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname1/proxy/: foo (200; 46.450719ms)
Aug 21 06:36:32.832: INFO: (7) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 46.952702ms)
Aug 21 06:36:32.833: INFO: (7) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 47.419311ms)
Aug 21 06:36:32.833: INFO: (7) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 47.134796ms)
Aug 21 06:36:32.839: INFO: (8) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 6.4046ms)
Aug 21 06:36:32.841: INFO: (8) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6/proxy/: test (200; 7.639977ms)
Aug 21 06:36:32.841: INFO: (8) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test<... (200; 10.859911ms)
Aug 21 06:36:32.844: INFO: (8) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 11.291868ms)
Aug 21 06:36:32.844: INFO: (8) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 10.968429ms)
Aug 21 06:36:32.844: INFO: (8) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 11.524028ms)
Aug 21 06:36:32.844: INFO: (8) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 11.279442ms)
Aug 21 06:36:32.845: INFO: (8) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 11.298747ms)
Aug 21 06:36:32.845: INFO: (8) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:160/proxy/: foo (200; 11.191524ms)
Aug 21 06:36:32.845: INFO: (8) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname1/proxy/: foo (200; 11.781388ms)
Aug 21 06:36:32.846: INFO: (8) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 12.394896ms)
Aug 21 06:36:32.850: INFO: (9) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6/proxy/: test (200; 4.299242ms)
Aug 21 06:36:32.853: INFO: (9) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 6.487555ms)
Aug 21 06:36:32.853: INFO: (9) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 6.443251ms)
Aug 21 06:36:32.853: INFO: (9) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 6.601477ms)
Aug 21 06:36:32.853: INFO: (9) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 7.254838ms)
Aug 21 06:36:32.853: INFO: (9) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 7.107883ms)
Aug 21 06:36:32.853: INFO: (9) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname1/proxy/: foo (200; 7.308875ms)
Aug 21 06:36:32.854: INFO: (9) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 7.765994ms)
Aug 21 06:36:32.854: INFO: (9) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 7.615916ms)
Aug 21 06:36:32.854: INFO: (9) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 7.848685ms)
Aug 21 06:36:32.854: INFO: (9) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: ... (200; 8.806422ms)
Aug 21 06:36:32.857: INFO: (9) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 10.274784ms)
Aug 21 06:36:32.860: INFO: (10) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6/proxy/: test (200; 3.651234ms)
Aug 21 06:36:32.861: INFO: (10) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:162/proxy/: bar (200; 4.249899ms)
Aug 21 06:36:32.862: INFO: (10) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 5.308243ms)
Aug 21 06:36:32.863: INFO: (10) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 5.684074ms)
Aug 21 06:36:32.863: INFO: (10) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 6.014957ms)
Aug 21 06:36:32.863: INFO: (10) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 6.316765ms)
Aug 21 06:36:32.863: INFO: (10) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 6.585079ms)
Aug 21 06:36:32.864: INFO: (10) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 6.623815ms)
Aug 21 06:36:32.864: INFO: (10) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: ... (200; 7.445008ms)
Aug 21 06:36:32.865: INFO: (10) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 7.4164ms)
Aug 21 06:36:32.865: INFO: (10) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:160/proxy/: foo (200; 7.427672ms)
Aug 21 06:36:32.865: INFO: (10) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 7.586147ms)
Aug 21 06:36:32.865: INFO: (10) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 7.460016ms)
Aug 21 06:36:32.866: INFO: (10) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 8.271193ms)
Aug 21 06:36:32.869: INFO: (11) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 3.372669ms)
Aug 21 06:36:32.872: INFO: (11) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 5.776436ms)
Aug 21 06:36:32.872: INFO: (11) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 5.592732ms)
Aug 21 06:36:32.872: INFO: (11) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 6.360158ms)
Aug 21 06:36:32.872: INFO: (11) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6/proxy/: test (200; 5.892263ms)
Aug 21 06:36:32.872: INFO: (11) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 6.386159ms)
Aug 21 06:36:32.873: INFO: (11) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: ... (200; 7.247082ms)
Aug 21 06:36:32.874: INFO: (11) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 7.397995ms)
Aug 21 06:36:32.874: INFO: (11) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 7.623185ms)
Aug 21 06:36:32.874: INFO: (11) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:160/proxy/: foo (200; 8.240994ms)
Aug 21 06:36:32.874: INFO: (11) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 7.883007ms)
Aug 21 06:36:32.896: INFO: (12) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:160/proxy/: foo (200; 20.68635ms)
Aug 21 06:36:32.896: INFO: (12) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test<... (200; 23.008595ms)
Aug 21 06:36:32.898: INFO: (12) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 23.340565ms)
Aug 21 06:36:32.898: INFO: (12) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6/proxy/: test (200; 23.954821ms)
Aug 21 06:36:32.899: INFO: (12) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 24.064039ms)
Aug 21 06:36:32.899: INFO: (12) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 24.334273ms)
Aug 21 06:36:32.899: INFO: (12) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 24.259277ms)
Aug 21 06:36:32.906: INFO: (13) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 6.45164ms)
Aug 21 06:36:32.906: INFO: (13) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6/proxy/: test (200; 6.720923ms)
Aug 21 06:36:32.906: INFO: (13) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: ... (200; 9.198802ms)
Aug 21 06:36:32.909: INFO: (13) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 9.092976ms)
Aug 21 06:36:32.909: INFO: (13) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 9.397307ms)
Aug 21 06:36:32.909: INFO: (13) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 9.872997ms)
Aug 21 06:36:32.915: INFO: (14) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 5.789273ms)
Aug 21 06:36:32.916: INFO: (14) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 6.596309ms)
Aug 21 06:36:32.916: INFO: (14) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 6.734043ms)
Aug 21 06:36:32.916: INFO: (14) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:160/proxy/: foo (200; 7.124672ms)
Aug 21 06:36:32.917: INFO: (14) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname1/proxy/: foo (200; 7.379351ms)
Aug 21 06:36:32.917: INFO: (14) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test (200; 7.731663ms)
Aug 21 06:36:32.917: INFO: (14) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 7.793596ms)
Aug 21 06:36:32.917: INFO: (14) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 7.785432ms)
Aug 21 06:36:32.917: INFO: (14) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:162/proxy/: bar (200; 7.940873ms)
Aug 21 06:36:32.917: INFO: (14) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 7.873612ms)
Aug 21 06:36:32.918: INFO: (14) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 8.191146ms)
Aug 21 06:36:32.918: INFO: (14) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 8.149068ms)
Aug 21 06:36:32.919: INFO: (14) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 9.075389ms)
Aug 21 06:36:32.919: INFO: (14) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 9.108029ms)
Aug 21 06:36:32.921: INFO: (14) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 11.126781ms)
Aug 21 06:36:32.926: INFO: (15) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 4.598655ms)
Aug 21 06:36:32.926: INFO: (15) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: ... (200; 5.287316ms)
Aug 21 06:36:32.928: INFO: (15) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 5.474922ms)
Aug 21 06:36:32.929: INFO: (15) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 5.445689ms)
Aug 21 06:36:32.929: INFO: (15) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 6.450693ms)
Aug 21 06:36:32.929: INFO: (15) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 5.539825ms)
Aug 21 06:36:32.929: INFO: (15) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 6.551115ms)
Aug 21 06:36:32.930: INFO: (15) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 5.790622ms)
Aug 21 06:36:32.930: INFO: (15) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname1/proxy/: foo (200; 6.319941ms)
Aug 21 06:36:32.930: INFO: (15) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6/proxy/: test (200; 8.975228ms)
Aug 21 06:36:32.930: INFO: (15) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 6.199108ms)
Aug 21 06:36:32.930: INFO: (15) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 6.386881ms)
Aug 21 06:36:32.937: INFO: (16) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 6.588466ms)
Aug 21 06:36:32.940: INFO: (16) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 8.589117ms)
Aug 21 06:36:32.941: INFO: (16) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 9.185936ms)
Aug 21 06:36:32.941: INFO: (16) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 9.994644ms)
Aug 21 06:36:32.941: INFO: (16) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname1/proxy/: foo (200; 10.2173ms)
Aug 21 06:36:32.941: INFO: (16) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 9.136183ms)
Aug 21 06:36:32.941: INFO: (16) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test (200; 9.958285ms)
Aug 21 06:36:32.942: INFO: (16) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 11.655935ms)
Aug 21 06:36:32.942: INFO: (16) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 10.701936ms)
Aug 21 06:36:32.943: INFO: (16) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 11.035872ms)
Aug 21 06:36:32.943: INFO: (16) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 9.772838ms)
Aug 21 06:36:32.946: INFO: (17) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:162/proxy/: bar (200; 3.223035ms)
Aug 21 06:36:32.948: INFO: (17) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 4.651043ms)
Aug 21 06:36:32.949: INFO: (17) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test<... (200; 6.786937ms)
Aug 21 06:36:32.950: INFO: (17) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6/proxy/: test (200; 7.390539ms)
Aug 21 06:36:32.951: INFO: (17) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 7.648932ms)
Aug 21 06:36:32.951: INFO: (17) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 7.402138ms)
Aug 21 06:36:32.951: INFO: (17) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 7.607759ms)
Aug 21 06:36:32.951: INFO: (17) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 7.70872ms)
Aug 21 06:36:32.951: INFO: (17) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 7.923776ms)
Aug 21 06:36:32.957: INFO: (18) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 5.744326ms)
Aug 21 06:36:32.960: INFO: (18) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 8.120727ms)
Aug 21 06:36:32.960: INFO: (18) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 8.519481ms)
Aug 21 06:36:32.960: INFO: (18) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6/proxy/: test (200; 8.718503ms)
Aug 21 06:36:32.960: INFO: (18) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test<... (200; 9.22781ms)
Aug 21 06:36:32.961: INFO: (18) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 9.435458ms)
Aug 21 06:36:32.961: INFO: (18) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 9.393289ms)
Aug 21 06:36:32.961: INFO: (18) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 9.343645ms)
Aug 21 06:36:32.961: INFO: (18) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname1/proxy/: foo (200; 9.402003ms)
Aug 21 06:36:32.961: INFO: (18) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 9.369228ms)
Aug 21 06:36:32.961: INFO: (18) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 9.462855ms)
Aug 21 06:36:32.961: INFO: (18) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:160/proxy/: foo (200; 9.488925ms)
Aug 21 06:36:32.961: INFO: (18) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 9.472375ms)
Aug 21 06:36:32.965: INFO: (19) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:160/proxy/: foo (200; 3.747073ms)
Aug 21 06:36:32.966: INFO: (19) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:1080/proxy/: test<... (200; 4.295439ms)
Aug 21 06:36:32.967: INFO: (19) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:1080/proxy/: ... (200; 5.171582ms)
Aug 21 06:36:32.967: INFO: (19) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname1/proxy/: foo (200; 5.294965ms)
Aug 21 06:36:32.967: INFO: (19) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:460/proxy/: tls baz (200; 5.925728ms)
Aug 21 06:36:32.968: INFO: (19) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:162/proxy/: bar (200; 6.977823ms)
Aug 21 06:36:32.969: INFO: (19) /api/v1/namespaces/proxy-3606/services/proxy-service-vql68:portname2/proxy/: bar (200; 7.334404ms)
Aug 21 06:36:32.969: INFO: (19) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:443/proxy/: test (200; 7.518859ms)
Aug 21 06:36:32.969: INFO: (19) /api/v1/namespaces/proxy-3606/services/http:proxy-service-vql68:portname2/proxy/: bar (200; 7.772724ms)
Aug 21 06:36:32.969: INFO: (19) /api/v1/namespaces/proxy-3606/pods/https:proxy-service-vql68-xbcc6:462/proxy/: tls qux (200; 7.762762ms)
Aug 21 06:36:32.970: INFO: (19) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname2/proxy/: tls qux (200; 8.113717ms)
Aug 21 06:36:32.970: INFO: (19) /api/v1/namespaces/proxy-3606/pods/proxy-service-vql68-xbcc6:162/proxy/: bar (200; 8.526974ms)
Aug 21 06:36:32.971: INFO: (19) /api/v1/namespaces/proxy-3606/services/https:proxy-service-vql68:tlsportname1/proxy/: tls baz (200; 9.050425ms)
Aug 21 06:36:32.971: INFO: (19) /api/v1/namespaces/proxy-3606/pods/http:proxy-service-vql68-xbcc6:160/proxy/: foo (200; 9.086228ms)
STEP: deleting ReplicationController proxy-service-vql68 in namespace proxy-3606, will wait for the garbage collector to delete the pods
Aug 21 06:36:33.031: INFO: Deleting ReplicationController proxy-service-vql68 took: 6.811225ms
Aug 21 06:36:33.132: INFO: Terminating ReplicationController proxy-service-vql68 pods took: 100.756952ms
[AfterEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:36:35.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3606" for this suite.

• [SLOW TEST:9.188 seconds]
[sig-network] Proxy
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":122,"skipped":1974,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:36:35.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 06:36:46.905: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 06:36:48.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588606, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588606, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588606, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588606, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 06:36:50.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588606, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588606, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588606, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588606, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 06:36:53.976: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 06:36:53.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3545-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:36:55.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8654" for this suite.
STEP: Destroying namespace "webhook-8654-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.799 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":123,"skipped":1980,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:36:55.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9288 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9288;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9288 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9288;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9288.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9288.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9288.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9288.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9288.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9288.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9288.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9288.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9288.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9288.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9288.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 33.209.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.209.33_udp@PTR;check="$$(dig +tcp +noall +answer +search 33.209.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.209.33_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9288 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9288;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9288 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9288;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9288.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9288.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9288.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9288.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9288.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9288.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9288.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9288.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9288.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9288.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9288.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9288.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 33.209.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.209.33_udp@PTR;check="$$(dig +tcp +noall +answer +search 33.209.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.209.33_tcp@PTR;sleep 1; done
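
Editor's note: the two long one-liners above run inside the "wheezy" and "jessie" probe pods; each loop repeatedly resolves the headless service by progressively less-qualified names (over UDP and TCP) and writes an `OK` marker file under /results for every name that resolves. The point of the test is that the search domains in the pod's /etc/resolv.conf expand a partial name such as `dns-test-service` to the same records as the fully qualified one. A hedged, minimal Go sketch of the same check is below; it only succeeds when run in-cluster in the dns-9288 namespace, the names are copied from the log for illustration, and the last entry assumes the default `cluster.local` domain.

```go
// Hedged sketch: resolving the service by partially qualified names from
// inside a pod, relying on the resolv.conf search path.
package main

import (
	"fmt"
	"net"
)

func main() {
	names := []string{
		"dns-test-service",                            // bare name, expanded via the search path
		"dns-test-service.dns-9288",                   // namespace-qualified
		"dns-test-service.dns-9288.svc",               // partially qualified
		"dns-test-service.dns-9288.svc.cluster.local", // fully qualified (assumes default cluster domain)
	}
	for _, name := range names {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s -> %v\n", name, addrs)
	}
}
```

The "Unable to read wheezy_udp@..." / "jessie_tcp@..." lines that follow are the test polling for those /results marker files through the pod proxy before the probe pod has written them; runs of these messages early in the test typically clear up once the lookups start succeeding.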

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 06:37:02.233: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.239: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.243: INFO: Unable to read wheezy_udp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.247: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.250: INFO: Unable to read wheezy_udp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.254: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.258: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.262: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.289: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.293: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.297: INFO: Unable to read jessie_udp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.301: INFO: Unable to read jessie_tcp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.305: INFO: Unable to read jessie_udp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.309: INFO: Unable to read jessie_tcp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.313: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.317: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:02.343: INFO: Lookups using dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9288 wheezy_tcp@dns-test-service.dns-9288 wheezy_udp@dns-test-service.dns-9288.svc wheezy_tcp@dns-test-service.dns-9288.svc wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9288 jessie_tcp@dns-test-service.dns-9288 jessie_udp@dns-test-service.dns-9288.svc jessie_tcp@dns-test-service.dns-9288.svc jessie_udp@_http._tcp.dns-test-service.dns-9288.svc jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc]

Aug 21 06:37:07.350: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.355: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.360: INFO: Unable to read wheezy_udp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.364: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.369: INFO: Unable to read wheezy_udp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.374: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.379: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.382: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.413: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.417: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.425: INFO: Unable to read jessie_udp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.431: INFO: Unable to read jessie_tcp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.437: INFO: Unable to read jessie_udp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.441: INFO: Unable to read jessie_tcp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.444: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.447: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:07.467: INFO: Lookups using dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9288 wheezy_tcp@dns-test-service.dns-9288 wheezy_udp@dns-test-service.dns-9288.svc wheezy_tcp@dns-test-service.dns-9288.svc wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9288 jessie_tcp@dns-test-service.dns-9288 jessie_udp@dns-test-service.dns-9288.svc jessie_tcp@dns-test-service.dns-9288.svc jessie_udp@_http._tcp.dns-test-service.dns-9288.svc jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc]

Aug 21 06:37:12.350: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.356: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.361: INFO: Unable to read wheezy_udp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.365: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.370: INFO: Unable to read wheezy_udp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.375: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.379: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.384: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.416: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.421: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.441: INFO: Unable to read jessie_udp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.447: INFO: Unable to read jessie_tcp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.451: INFO: Unable to read jessie_udp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.455: INFO: Unable to read jessie_tcp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.458: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.462: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:12.485: INFO: Lookups using dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9288 wheezy_tcp@dns-test-service.dns-9288 wheezy_udp@dns-test-service.dns-9288.svc wheezy_tcp@dns-test-service.dns-9288.svc wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9288 jessie_tcp@dns-test-service.dns-9288 jessie_udp@dns-test-service.dns-9288.svc jessie_tcp@dns-test-service.dns-9288.svc jessie_udp@_http._tcp.dns-test-service.dns-9288.svc jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc]

Aug 21 06:37:17.350: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.356: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.361: INFO: Unable to read wheezy_udp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.366: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.370: INFO: Unable to read wheezy_udp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.374: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.378: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.382: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.413: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.417: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.421: INFO: Unable to read jessie_udp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.426: INFO: Unable to read jessie_tcp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.430: INFO: Unable to read jessie_udp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.434: INFO: Unable to read jessie_tcp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.439: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.443: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:17.470: INFO: Lookups using dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9288 wheezy_tcp@dns-test-service.dns-9288 wheezy_udp@dns-test-service.dns-9288.svc wheezy_tcp@dns-test-service.dns-9288.svc wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9288 jessie_tcp@dns-test-service.dns-9288 jessie_udp@dns-test-service.dns-9288.svc jessie_tcp@dns-test-service.dns-9288.svc jessie_udp@_http._tcp.dns-test-service.dns-9288.svc jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc]

Aug 21 06:37:22.349: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.353: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.357: INFO: Unable to read wheezy_udp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.361: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.365: INFO: Unable to read wheezy_udp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.369: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.372: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.376: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.405: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.409: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.414: INFO: Unable to read jessie_udp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.418: INFO: Unable to read jessie_tcp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.423: INFO: Unable to read jessie_udp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.428: INFO: Unable to read jessie_tcp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.433: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.437: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:22.467: INFO: Lookups using dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9288 wheezy_tcp@dns-test-service.dns-9288 wheezy_udp@dns-test-service.dns-9288.svc wheezy_tcp@dns-test-service.dns-9288.svc wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9288 jessie_tcp@dns-test-service.dns-9288 jessie_udp@dns-test-service.dns-9288.svc jessie_tcp@dns-test-service.dns-9288.svc jessie_udp@_http._tcp.dns-test-service.dns-9288.svc jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc]

Aug 21 06:37:27.351: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.357: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.362: INFO: Unable to read wheezy_udp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.367: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.371: INFO: Unable to read wheezy_udp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.376: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.381: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.385: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.417: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.421: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.424: INFO: Unable to read jessie_udp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.428: INFO: Unable to read jessie_tcp@dns-test-service.dns-9288 from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.432: INFO: Unable to read jessie_udp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.436: INFO: Unable to read jessie_tcp@dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.441: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.446: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc from pod dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914: the server could not find the requested resource (get pods dns-test-d9f50506-c485-495b-8599-f4fb93a19914)
Aug 21 06:37:27.474: INFO: Lookups using dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9288 wheezy_tcp@dns-test-service.dns-9288 wheezy_udp@dns-test-service.dns-9288.svc wheezy_tcp@dns-test-service.dns-9288.svc wheezy_udp@_http._tcp.dns-test-service.dns-9288.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9288.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9288 jessie_tcp@dns-test-service.dns-9288 jessie_udp@dns-test-service.dns-9288.svc jessie_tcp@dns-test-service.dns-9288.svc jessie_udp@_http._tcp.dns-test-service.dns-9288.svc jessie_tcp@_http._tcp.dns-test-service.dns-9288.svc]

Aug 21 06:37:32.454: INFO: DNS probes using dns-9288/dns-test-d9f50506-c485-495b-8599-f4fb93a19914 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:37:33.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9288" for this suite.

• [SLOW TEST:38.151 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":124,"skipped":1998,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:37:33.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 06:37:38.492: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 06:37:40.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588658, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588658, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588658, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588658, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 06:37:43.548: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:37:43.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9422" for this suite.
STEP: Destroying namespace "webhook-9422-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.423 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":125,"skipped":1998,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:37:43.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-6fd9930d-3452-4c23-9c24-59acfaa463d5
STEP: Creating configMap with name cm-test-opt-upd-0c3e25f2-cd7b-4c6f-b860-c2c4995471bf
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6fd9930d-3452-4c23-9c24-59acfaa463d5
STEP: Updating configmap cm-test-opt-upd-0c3e25f2-cd7b-4c6f-b860-c2c4995471bf
STEP: Creating configMap with name cm-test-opt-create-51cb8a2e-8a34-4b45-8a0e-a176c9820ecc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:37:54.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4719" for this suite.

• [SLOW TEST:10.313 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2009,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:37:54.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 06:37:54.232: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:38:01.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4557" for this suite.

• [SLOW TEST:7.025 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":127,"skipped":2015,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:38:01.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 21 06:38:05.911: INFO: Successfully updated pod "pod-update-6a95efcd-9c18-4da9-98ee-3fe214878fda"
STEP: verifying the updated pod is in kubernetes
Aug 21 06:38:05.939: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:38:05.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4999" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2033,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:38:05.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:38:06.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-2648" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":129,"skipped":2052,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:38:06.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 06:38:06.279: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 06:38:06.328: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:06.344: INFO: Number of nodes with available pods: 0
Aug 21 06:38:06.345: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:38:07.357: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:07.363: INFO: Number of nodes with available pods: 0
Aug 21 06:38:07.363: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:38:08.438: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:08.682: INFO: Number of nodes with available pods: 0
Aug 21 06:38:08.682: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:38:09.355: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:09.360: INFO: Number of nodes with available pods: 0
Aug 21 06:38:09.360: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:38:10.356: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:10.362: INFO: Number of nodes with available pods: 1
Aug 21 06:38:10.362: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:38:11.393: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:11.420: INFO: Number of nodes with available pods: 2
Aug 21 06:38:11.420: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 21 06:38:11.612: INFO: Wrong image for pod: daemon-set-2st64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 06:38:11.612: INFO: Wrong image for pod: daemon-set-bnpnc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 06:38:11.652: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:12.661: INFO: Wrong image for pod: daemon-set-2st64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 06:38:12.661: INFO: Wrong image for pod: daemon-set-bnpnc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 06:38:12.759: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:13.676: INFO: Wrong image for pod: daemon-set-2st64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 06:38:13.677: INFO: Pod daemon-set-2st64 is not available
Aug 21 06:38:13.677: INFO: Wrong image for pod: daemon-set-bnpnc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 06:38:13.684: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:14.683: INFO: Wrong image for pod: daemon-set-bnpnc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 06:38:14.683: INFO: Pod daemon-set-fr6s7 is not available
Aug 21 06:38:14.693: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:15.661: INFO: Wrong image for pod: daemon-set-bnpnc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 06:38:15.661: INFO: Pod daemon-set-fr6s7 is not available
Aug 21 06:38:15.668: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:16.670: INFO: Wrong image for pod: daemon-set-bnpnc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 06:38:16.671: INFO: Pod daemon-set-fr6s7 is not available
Aug 21 06:38:16.680: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:17.660: INFO: Wrong image for pod: daemon-set-bnpnc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 06:38:17.667: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:18.658: INFO: Wrong image for pod: daemon-set-bnpnc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 06:38:18.659: INFO: Pod daemon-set-bnpnc is not available
Aug 21 06:38:18.672: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:19.674: INFO: Pod daemon-set-hcf6n is not available
Aug 21 06:38:19.740: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 21 06:38:19.749: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:19.756: INFO: Number of nodes with available pods: 1
Aug 21 06:38:19.756: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:38:20.774: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:20.780: INFO: Number of nodes with available pods: 1
Aug 21 06:38:20.780: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:38:21.809: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:21.813: INFO: Number of nodes with available pods: 1
Aug 21 06:38:21.813: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:38:22.767: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:38:22.774: INFO: Number of nodes with available pods: 2
Aug 21 06:38:22.775: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3101, will wait for the garbage collector to delete the pods
Aug 21 06:38:22.866: INFO: Deleting DaemonSet.extensions daemon-set took: 8.351562ms
Aug 21 06:38:23.167: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.80763ms
Aug 21 06:38:29.274: INFO: Number of nodes with available pods: 0
Aug 21 06:38:29.274: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 06:38:29.279: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3101/daemonsets","resourceVersion":"2024565"},"items":null}

Aug 21 06:38:29.282: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3101/pods","resourceVersion":"2024565"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:38:29.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3101" for this suite.

• [SLOW TEST:23.193 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":130,"skipped":2105,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:38:29.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:38:41.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4359" for this suite.

• [SLOW TEST:12.090 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":131,"skipped":2121,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:38:41.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-32a10c5f-d6c3-407d-9736-7217563bda6d
STEP: Creating configMap with name cm-test-opt-upd-c619e5eb-b158-4190-a03f-ab7cfcb01e16
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-32a10c5f-d6c3-407d-9736-7217563bda6d
STEP: Updating configmap cm-test-opt-upd-c619e5eb-b158-4190-a03f-ab7cfcb01e16
STEP: Creating configMap with name cm-test-opt-create-184f03c2-1ca9-44be-8fda-75bd632f381c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:38:51.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1001" for this suite.

• [SLOW TEST:10.509 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2171,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:38:51.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 06:39:00.089: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 06:39:02.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588740, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588740, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588740, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588740, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 06:39:04.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588740, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588740, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588740, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588740, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 06:39:07.395: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:39:07.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3366" for this suite.
STEP: Destroying namespace "webhook-3366-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.778 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":133,"skipped":2208,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:39:07.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-bf9d2414-01b2-49a7-b1e4-8d696060ae0d in namespace container-probe-2314
Aug 21 06:39:11.882: INFO: Started pod liveness-bf9d2414-01b2-49a7-b1e4-8d696060ae0d in namespace container-probe-2314
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 06:39:11.889: INFO: Initial restart count of pod liveness-bf9d2414-01b2-49a7-b1e4-8d696060ae0d is 0
Aug 21 06:39:31.968: INFO: Restart count of pod container-probe-2314/liveness-bf9d2414-01b2-49a7-b1e4-8d696060ae0d is now 1 (20.078808469s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:39:32.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2314" for this suite.

• [SLOW TEST:24.347 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2224,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:39:32.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-tncv
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 06:39:32.500: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tncv" in namespace "subpath-7014" to be "Succeeded or Failed"
Aug 21 06:39:32.553: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Pending", Reason="", readiness=false. Elapsed: 52.65824ms
Aug 21 06:39:34.562: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060920633s
Aug 21 06:39:36.570: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068975131s
Aug 21 06:39:38.577: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Running", Reason="", readiness=true. Elapsed: 6.076757078s
Aug 21 06:39:40.585: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Running", Reason="", readiness=true. Elapsed: 8.084553709s
Aug 21 06:39:42.593: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Running", Reason="", readiness=true. Elapsed: 10.09278025s
Aug 21 06:39:44.601: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Running", Reason="", readiness=true. Elapsed: 12.100156121s
Aug 21 06:39:46.608: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Running", Reason="", readiness=true. Elapsed: 14.107480881s
Aug 21 06:39:48.616: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Running", Reason="", readiness=true. Elapsed: 16.115129873s
Aug 21 06:39:50.624: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Running", Reason="", readiness=true. Elapsed: 18.122839599s
Aug 21 06:39:52.632: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Running", Reason="", readiness=true. Elapsed: 20.131779931s
Aug 21 06:39:54.641: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Running", Reason="", readiness=true. Elapsed: 22.139936895s
Aug 21 06:39:56.648: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Running", Reason="", readiness=true. Elapsed: 24.147356858s
Aug 21 06:39:58.656: INFO: Pod "pod-subpath-test-projected-tncv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.155515979s
STEP: Saw pod success
Aug 21 06:39:58.657: INFO: Pod "pod-subpath-test-projected-tncv" satisfied condition "Succeeded or Failed"
Aug 21 06:39:58.662: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-tncv container test-container-subpath-projected-tncv: 
STEP: delete the pod
Aug 21 06:39:58.720: INFO: Waiting for pod pod-subpath-test-projected-tncv to disappear
Aug 21 06:39:58.738: INFO: Pod pod-subpath-test-projected-tncv no longer exists
STEP: Deleting pod pod-subpath-test-projected-tncv
Aug 21 06:39:58.738: INFO: Deleting pod "pod-subpath-test-projected-tncv" in namespace "subpath-7014"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:39:58.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7014" for this suite.

• [SLOW TEST:26.687 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":135,"skipped":2232,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:39:58.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-e5aba93a-e554-4933-be5e-ce1df5992240
STEP: Creating secret with name s-test-opt-upd-9f0c7f38-cc97-455b-9407-4921d250cabf
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e5aba93a-e554-4933-be5e-ce1df5992240
STEP: Updating secret s-test-opt-upd-9f0c7f38-cc97-455b-9407-4921d250cabf
STEP: Creating secret with name s-test-opt-create-f5b700ac-91fe-4ab3-9ebd-90e22be8acef
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:40:09.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6791" for this suite.

• [SLOW TEST:10.316 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2247,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:40:09.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-795658b8-178a-44ed-a14b-8dc46edcf76c
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:40:15.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1860" for this suite.

• [SLOW TEST:6.218 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2250,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:40:15.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:40:15.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1897" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":138,"skipped":2264,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:40:15.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 06:40:15.651: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7edc4f0c-b3fa-4fa3-9a94-b878c7b24d56" in namespace "projected-2423" to be "Succeeded or Failed"
Aug 21 06:40:15.682: INFO: Pod "downwardapi-volume-7edc4f0c-b3fa-4fa3-9a94-b878c7b24d56": Phase="Pending", Reason="", readiness=false. Elapsed: 31.285605ms
Aug 21 06:40:17.689: INFO: Pod "downwardapi-volume-7edc4f0c-b3fa-4fa3-9a94-b878c7b24d56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038149344s
Aug 21 06:40:19.695: INFO: Pod "downwardapi-volume-7edc4f0c-b3fa-4fa3-9a94-b878c7b24d56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043974817s
STEP: Saw pod success
Aug 21 06:40:19.695: INFO: Pod "downwardapi-volume-7edc4f0c-b3fa-4fa3-9a94-b878c7b24d56" satisfied condition "Succeeded or Failed"
Aug 21 06:40:19.701: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-7edc4f0c-b3fa-4fa3-9a94-b878c7b24d56 container client-container: 
STEP: delete the pod
Aug 21 06:40:19.745: INFO: Waiting for pod downwardapi-volume-7edc4f0c-b3fa-4fa3-9a94-b878c7b24d56 to disappear
Aug 21 06:40:19.750: INFO: Pod downwardapi-volume-7edc4f0c-b3fa-4fa3-9a94-b878c7b24d56 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:40:19.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2423" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2314,"failed":0}

------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:40:19.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-cb115f0b-5c2f-4449-954b-4fa23641080b
STEP: Creating a pod to test consume configMaps
Aug 21 06:40:19.860: INFO: Waiting up to 5m0s for pod "pod-configmaps-a76258d2-97af-42a7-b46d-8da79b03bbaa" in namespace "configmap-4221" to be "Succeeded or Failed"
Aug 21 06:40:19.899: INFO: Pod "pod-configmaps-a76258d2-97af-42a7-b46d-8da79b03bbaa": Phase="Pending", Reason="", readiness=false. Elapsed: 38.468482ms
Aug 21 06:40:21.906: INFO: Pod "pod-configmaps-a76258d2-97af-42a7-b46d-8da79b03bbaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045192602s
Aug 21 06:40:23.912: INFO: Pod "pod-configmaps-a76258d2-97af-42a7-b46d-8da79b03bbaa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051874036s
Aug 21 06:40:25.930: INFO: Pod "pod-configmaps-a76258d2-97af-42a7-b46d-8da79b03bbaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06891264s
STEP: Saw pod success
Aug 21 06:40:25.930: INFO: Pod "pod-configmaps-a76258d2-97af-42a7-b46d-8da79b03bbaa" satisfied condition "Succeeded or Failed"
Aug 21 06:40:25.935: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-a76258d2-97af-42a7-b46d-8da79b03bbaa container configmap-volume-test: 
STEP: delete the pod
Aug 21 06:40:26.006: INFO: Waiting for pod pod-configmaps-a76258d2-97af-42a7-b46d-8da79b03bbaa to disappear
Aug 21 06:40:26.019: INFO: Pod pod-configmaps-a76258d2-97af-42a7-b46d-8da79b03bbaa no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:40:26.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4221" for this suite.

• [SLOW TEST:6.267 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2314,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:40:26.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 06:40:32.837: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 06:40:34.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588832, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588832, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588832, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588832, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 06:40:37.987: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 21 06:40:42.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config attach --namespace=webhook-282 to-be-attached-pod -i -c=container1'
Aug 21 06:40:43.346: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:40:43.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-282" for this suite.
STEP: Destroying namespace "webhook-282-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.410 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":141,"skipped":2318,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:40:43.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 06:40:43.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25e6ad59-8c94-4efe-8f35-58c4667a2be6" in namespace "downward-api-4946" to be "Succeeded or Failed"
Aug 21 06:40:43.576: INFO: Pod "downwardapi-volume-25e6ad59-8c94-4efe-8f35-58c4667a2be6": Phase="Pending", Reason="", readiness=false. Elapsed: 43.565917ms
Aug 21 06:40:46.095: INFO: Pod "downwardapi-volume-25e6ad59-8c94-4efe-8f35-58c4667a2be6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.562103088s
Aug 21 06:40:48.103: INFO: Pod "downwardapi-volume-25e6ad59-8c94-4efe-8f35-58c4667a2be6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.570706276s
STEP: Saw pod success
Aug 21 06:40:48.103: INFO: Pod "downwardapi-volume-25e6ad59-8c94-4efe-8f35-58c4667a2be6" satisfied condition "Succeeded or Failed"
Aug 21 06:40:48.133: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-25e6ad59-8c94-4efe-8f35-58c4667a2be6 container client-container: 
STEP: delete the pod
Aug 21 06:40:48.179: INFO: Waiting for pod downwardapi-volume-25e6ad59-8c94-4efe-8f35-58c4667a2be6 to disappear
Aug 21 06:40:48.203: INFO: Pod downwardapi-volume-25e6ad59-8c94-4efe-8f35-58c4667a2be6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:40:48.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4946" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2319,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:40:48.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 21 06:40:48.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-884'
Aug 21 06:40:49.831: INFO: stderr: ""
Aug 21 06:40:49.832: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Aug 21 06:40:49.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-884'
Aug 21 06:40:59.174: INFO: stderr: ""
Aug 21 06:40:59.174: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:40:59.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-884" for this suite.

• [SLOW TEST:10.984 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":275,"completed":143,"skipped":2340,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:40:59.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-1264
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1264
STEP: Deleting pre-stop pod
Aug 21 06:41:12.391: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:41:12.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1264" for this suite.

• [SLOW TEST:13.286 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":144,"skipped":2352,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:41:12.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 21 06:41:12.926: INFO: Waiting up to 5m0s for pod "pod-71154537-4d99-41fa-9f7f-22715f3be337" in namespace "emptydir-3033" to be "Succeeded or Failed"
Aug 21 06:41:12.945: INFO: Pod "pod-71154537-4d99-41fa-9f7f-22715f3be337": Phase="Pending", Reason="", readiness=false. Elapsed: 19.015919ms
Aug 21 06:41:14.951: INFO: Pod "pod-71154537-4d99-41fa-9f7f-22715f3be337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025076812s
Aug 21 06:41:17.026: INFO: Pod "pod-71154537-4d99-41fa-9f7f-22715f3be337": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100497481s
STEP: Saw pod success
Aug 21 06:41:17.027: INFO: Pod "pod-71154537-4d99-41fa-9f7f-22715f3be337" satisfied condition "Succeeded or Failed"
Aug 21 06:41:17.032: INFO: Trying to get logs from node kali-worker pod pod-71154537-4d99-41fa-9f7f-22715f3be337 container test-container: 
STEP: delete the pod
Aug 21 06:41:17.070: INFO: Waiting for pod pod-71154537-4d99-41fa-9f7f-22715f3be337 to disappear
Aug 21 06:41:17.082: INFO: Pod pod-71154537-4d99-41fa-9f7f-22715f3be337 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:41:17.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3033" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2369,"failed":0}
SS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:41:17.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-f14312d8-8cf1-4b2d-a3b8-cab32408ecb2
Aug 21 06:41:17.510: INFO: Pod name my-hostname-basic-f14312d8-8cf1-4b2d-a3b8-cab32408ecb2: Found 0 pods out of 1
Aug 21 06:41:22.517: INFO: Pod name my-hostname-basic-f14312d8-8cf1-4b2d-a3b8-cab32408ecb2: Found 1 pods out of 1
Aug 21 06:41:22.517: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f14312d8-8cf1-4b2d-a3b8-cab32408ecb2" are running
Aug 21 06:41:22.523: INFO: Pod "my-hostname-basic-f14312d8-8cf1-4b2d-a3b8-cab32408ecb2-4h69v" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 06:41:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 06:41:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 06:41:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 06:41:17 +0000 UTC Reason: Message:}])
Aug 21 06:41:22.524: INFO: Trying to dial the pod
Aug 21 06:41:27.542: INFO: Controller my-hostname-basic-f14312d8-8cf1-4b2d-a3b8-cab32408ecb2: Got expected result from replica 1 [my-hostname-basic-f14312d8-8cf1-4b2d-a3b8-cab32408ecb2-4h69v]: "my-hostname-basic-f14312d8-8cf1-4b2d-a3b8-cab32408ecb2-4h69v", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:41:27.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4416" for this suite.

• [SLOW TEST:10.458 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":146,"skipped":2371,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:41:27.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 06:41:38.687: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 06:41:40.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588898, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588898, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588898, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733588898, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 06:41:43.753: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:41:53.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1907" for this suite.
STEP: Destroying namespace "webhook-1907-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:26.486 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":147,"skipped":2402,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:41:54.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug 21 06:41:54.120: INFO: namespace kubectl-2614
Aug 21 06:41:54.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2614'
Aug 21 06:41:55.698: INFO: stderr: ""
Aug 21 06:41:55.699: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 21 06:41:56.707: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 06:41:56.707: INFO: Found 0 / 1
Aug 21 06:41:57.798: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 06:41:57.798: INFO: Found 0 / 1
Aug 21 06:41:58.706: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 06:41:58.706: INFO: Found 0 / 1
Aug 21 06:41:59.708: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 06:41:59.708: INFO: Found 1 / 1
Aug 21 06:41:59.708: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 21 06:41:59.714: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 06:41:59.714: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 21 06:41:59.715: INFO: wait on agnhost-master startup in kubectl-2614 
Aug 21 06:41:59.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config logs agnhost-master-dkdlv agnhost-master --namespace=kubectl-2614'
Aug 21 06:42:00.879: INFO: stderr: ""
Aug 21 06:42:00.879: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 21 06:42:00.880: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2614'
Aug 21 06:42:02.235: INFO: stderr: ""
Aug 21 06:42:02.236: INFO: stdout: "service/rm2 exposed\n"
Aug 21 06:42:02.260: INFO: Service rm2 in namespace kubectl-2614 found.
STEP: exposing service
Aug 21 06:42:04.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2614'
Aug 21 06:42:05.468: INFO: stderr: ""
Aug 21 06:42:05.468: INFO: stdout: "service/rm3 exposed\n"
Aug 21 06:42:05.473: INFO: Service rm3 in namespace kubectl-2614 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:42:07.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2614" for this suite.

• [SLOW TEST:13.455 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":148,"skipped":2421,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:42:07.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 06:42:07.602: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e0576ef-229a-4a2e-adca-f908d6647604" in namespace "downward-api-2939" to be "Succeeded or Failed"
Aug 21 06:42:07.619: INFO: Pod "downwardapi-volume-6e0576ef-229a-4a2e-adca-f908d6647604": Phase="Pending", Reason="", readiness=false. Elapsed: 17.447632ms
Aug 21 06:42:09.643: INFO: Pod "downwardapi-volume-6e0576ef-229a-4a2e-adca-f908d6647604": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041203148s
Aug 21 06:42:11.650: INFO: Pod "downwardapi-volume-6e0576ef-229a-4a2e-adca-f908d6647604": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048233654s
STEP: Saw pod success
Aug 21 06:42:11.650: INFO: Pod "downwardapi-volume-6e0576ef-229a-4a2e-adca-f908d6647604" satisfied condition "Succeeded or Failed"
Aug 21 06:42:11.679: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-6e0576ef-229a-4a2e-adca-f908d6647604 container client-container: 
STEP: delete the pod
Aug 21 06:42:11.710: INFO: Waiting for pod downwardapi-volume-6e0576ef-229a-4a2e-adca-f908d6647604 to disappear
Aug 21 06:42:11.719: INFO: Pod downwardapi-volume-6e0576ef-229a-4a2e-adca-f908d6647604 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:42:11.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2939" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2429,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:42:11.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-s9p6
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 06:42:11.899: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-s9p6" in namespace "subpath-7047" to be "Succeeded or Failed"
Aug 21 06:42:11.945: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Pending", Reason="", readiness=false. Elapsed: 45.377139ms
Aug 21 06:42:14.057: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158074417s
Aug 21 06:42:16.195: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296070551s
Aug 21 06:42:18.752: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.852251603s
Aug 21 06:42:20.759: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Running", Reason="", readiness=true. Elapsed: 8.859793399s
Aug 21 06:42:22.767: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Running", Reason="", readiness=true. Elapsed: 10.867471259s
Aug 21 06:42:24.775: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Running", Reason="", readiness=true. Elapsed: 12.875375857s
Aug 21 06:42:26.817: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Running", Reason="", readiness=true. Elapsed: 14.917948385s
Aug 21 06:42:28.823: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Running", Reason="", readiness=true. Elapsed: 16.923244262s
Aug 21 06:42:30.828: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Running", Reason="", readiness=true. Elapsed: 18.928452962s
Aug 21 06:42:32.833: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Running", Reason="", readiness=true. Elapsed: 20.933205857s
Aug 21 06:42:34.839: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Running", Reason="", readiness=true. Elapsed: 22.939125385s
Aug 21 06:42:36.844: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Running", Reason="", readiness=true. Elapsed: 24.944959259s
Aug 21 06:42:38.852: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Running", Reason="", readiness=true. Elapsed: 26.952215087s
Aug 21 06:42:40.858: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Running", Reason="", readiness=true. Elapsed: 28.958857218s
Aug 21 06:42:42.865: INFO: Pod "pod-subpath-test-configmap-s9p6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.965894366s
STEP: Saw pod success
Aug 21 06:42:42.866: INFO: Pod "pod-subpath-test-configmap-s9p6" satisfied condition "Succeeded or Failed"
Aug 21 06:42:42.873: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-s9p6 container test-container-subpath-configmap-s9p6: 
STEP: delete the pod
Aug 21 06:42:42.929: INFO: Waiting for pod pod-subpath-test-configmap-s9p6 to disappear
Aug 21 06:42:42.947: INFO: Pod pod-subpath-test-configmap-s9p6 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-s9p6
Aug 21 06:42:42.947: INFO: Deleting pod "pod-subpath-test-configmap-s9p6" in namespace "subpath-7047"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:42:42.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7047" for this suite.

• [SLOW TEST:31.245 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":150,"skipped":2436,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:42:42.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 21 06:42:43.064: INFO: Waiting up to 5m0s for pod "downward-api-42081dea-4ff4-4006-a16f-fb35059a063b" in namespace "downward-api-6268" to be "Succeeded or Failed"
Aug 21 06:42:43.122: INFO: Pod "downward-api-42081dea-4ff4-4006-a16f-fb35059a063b": Phase="Pending", Reason="", readiness=false. Elapsed: 58.607519ms
Aug 21 06:42:45.208: INFO: Pod "downward-api-42081dea-4ff4-4006-a16f-fb35059a063b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144266458s
Aug 21 06:42:47.412: INFO: Pod "downward-api-42081dea-4ff4-4006-a16f-fb35059a063b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.347750753s
STEP: Saw pod success
Aug 21 06:42:47.412: INFO: Pod "downward-api-42081dea-4ff4-4006-a16f-fb35059a063b" satisfied condition "Succeeded or Failed"
Aug 21 06:42:47.418: INFO: Trying to get logs from node kali-worker pod downward-api-42081dea-4ff4-4006-a16f-fb35059a063b container dapi-container: 
STEP: delete the pod
Aug 21 06:42:47.598: INFO: Waiting for pod downward-api-42081dea-4ff4-4006-a16f-fb35059a063b to disappear
Aug 21 06:42:47.606: INFO: Pod downward-api-42081dea-4ff4-4006-a16f-fb35059a063b no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:42:47.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6268" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2445,"failed":0}
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:42:47.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 06:42:51.853: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:42:51.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6187" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2447,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:42:51.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 21 06:42:52.097: INFO: Created pod &Pod{ObjectMeta:{dns-9023  dns-9023 /api/v1/namespaces/dns-9023/pods/dns-9023 ad9445ac-eb8b-40e0-8b0b-09e66f96c5af 2026129 0 2020-08-21 06:42:52 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-08-21 06:42:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v5jx8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v5jx8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v5jx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]
LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 06:42:52.114: INFO: The status of Pod dns-9023 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 06:42:54.121: INFO: The status of Pod dns-9023 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 06:42:56.121: INFO: The status of Pod dns-9023 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 06:42:58.120: INFO: The status of Pod dns-9023 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Aug 21 06:42:58.121: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9023 PodName:dns-9023 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:42:58.121: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:42:58.228111      10 log.go:172] (0xa2e8770) (0xa2e8930) Create stream
I0821 06:42:58.228258      10 log.go:172] (0xa2e8770) (0xa2e8930) Stream added, broadcasting: 1
I0821 06:42:58.232136      10 log.go:172] (0xa2e8770) Reply frame received for 1
I0821 06:42:58.232384      10 log.go:172] (0xa2e8770) (0x772cbd0) Create stream
I0821 06:42:58.232502      10 log.go:172] (0xa2e8770) (0x772cbd0) Stream added, broadcasting: 3
I0821 06:42:58.234206      10 log.go:172] (0xa2e8770) Reply frame received for 3
I0821 06:42:58.234352      10 log.go:172] (0xa2e8770) (0x7b2a460) Create stream
I0821 06:42:58.234432      10 log.go:172] (0xa2e8770) (0x7b2a460) Stream added, broadcasting: 5
I0821 06:42:58.235733      10 log.go:172] (0xa2e8770) Reply frame received for 5
I0821 06:42:58.290988      10 log.go:172] (0xa2e8770) Data frame received for 3
I0821 06:42:58.291200      10 log.go:172] (0x772cbd0) (3) Data frame handling
I0821 06:42:58.291338      10 log.go:172] (0x772cbd0) (3) Data frame sent
I0821 06:42:58.292438      10 log.go:172] (0xa2e8770) Data frame received for 5
I0821 06:42:58.292602      10 log.go:172] (0x7b2a460) (5) Data frame handling
I0821 06:42:58.292795      10 log.go:172] (0xa2e8770) Data frame received for 3
I0821 06:42:58.292896      10 log.go:172] (0x772cbd0) (3) Data frame handling
I0821 06:42:58.294201      10 log.go:172] (0xa2e8770) Data frame received for 1
I0821 06:42:58.294277      10 log.go:172] (0xa2e8930) (1) Data frame handling
I0821 06:42:58.294391      10 log.go:172] (0xa2e8930) (1) Data frame sent
I0821 06:42:58.294499      10 log.go:172] (0xa2e8770) (0xa2e8930) Stream removed, broadcasting: 1
I0821 06:42:58.294644      10 log.go:172] (0xa2e8770) Go away received
I0821 06:42:58.295051      10 log.go:172] (0xa2e8770) (0xa2e8930) Stream removed, broadcasting: 1
I0821 06:42:58.295180      10 log.go:172] (0xa2e8770) (0x772cbd0) Stream removed, broadcasting: 3
I0821 06:42:58.295289      10 log.go:172] (0xa2e8770) (0x7b2a460) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Aug 21 06:42:58.295: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9023 PodName:dns-9023 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:42:58.295: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:42:58.400057      10 log.go:172] (0x93270a0) (0x846ea10) Create stream
I0821 06:42:58.400217      10 log.go:172] (0x93270a0) (0x846ea10) Stream added, broadcasting: 1
I0821 06:42:58.406813      10 log.go:172] (0x93270a0) Reply frame received for 1
I0821 06:42:58.406914      10 log.go:172] (0x93270a0) (0xa2e9110) Create stream
I0821 06:42:58.406964      10 log.go:172] (0x93270a0) (0xa2e9110) Stream added, broadcasting: 3
I0821 06:42:58.407880      10 log.go:172] (0x93270a0) Reply frame received for 3
I0821 06:42:58.407982      10 log.go:172] (0x93270a0) (0xa2e9490) Create stream
I0821 06:42:58.408035      10 log.go:172] (0x93270a0) (0xa2e9490) Stream added, broadcasting: 5
I0821 06:42:58.409032      10 log.go:172] (0x93270a0) Reply frame received for 5
I0821 06:42:58.481652      10 log.go:172] (0x93270a0) Data frame received for 3
I0821 06:42:58.481791      10 log.go:172] (0xa2e9110) (3) Data frame handling
I0821 06:42:58.481927      10 log.go:172] (0xa2e9110) (3) Data frame sent
I0821 06:42:58.483615      10 log.go:172] (0x93270a0) Data frame received for 3
I0821 06:42:58.483795      10 log.go:172] (0xa2e9110) (3) Data frame handling
I0821 06:42:58.483926      10 log.go:172] (0x93270a0) Data frame received for 5
I0821 06:42:58.484048      10 log.go:172] (0xa2e9490) (5) Data frame handling
I0821 06:42:58.485150      10 log.go:172] (0x93270a0) Data frame received for 1
I0821 06:42:58.485262      10 log.go:172] (0x846ea10) (1) Data frame handling
I0821 06:42:58.485376      10 log.go:172] (0x846ea10) (1) Data frame sent
I0821 06:42:58.485525      10 log.go:172] (0x93270a0) (0x846ea10) Stream removed, broadcasting: 1
I0821 06:42:58.485675      10 log.go:172] (0x93270a0) Go away received
I0821 06:42:58.486114      10 log.go:172] (0x93270a0) (0x846ea10) Stream removed, broadcasting: 1
I0821 06:42:58.486247      10 log.go:172] (0x93270a0) (0xa2e9110) Stream removed, broadcasting: 3
I0821 06:42:58.486384      10 log.go:172] (0x93270a0) (0xa2e9490) Stream removed, broadcasting: 5
Aug 21 06:42:58.486: INFO: Deleting pod dns-9023...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:42:58.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9023" for this suite.

• [SLOW TEST:6.630 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":153,"skipped":2477,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:42:58.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 06:42:59.167: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bb9ec62-4bdc-4f88-89d1-bc05f618ec01" in namespace "projected-5197" to be "Succeeded or Failed"
Aug 21 06:42:59.324: INFO: Pod "downwardapi-volume-4bb9ec62-4bdc-4f88-89d1-bc05f618ec01": Phase="Pending", Reason="", readiness=false. Elapsed: 156.417097ms
Aug 21 06:43:01.331: INFO: Pod "downwardapi-volume-4bb9ec62-4bdc-4f88-89d1-bc05f618ec01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163643655s
Aug 21 06:43:03.338: INFO: Pod "downwardapi-volume-4bb9ec62-4bdc-4f88-89d1-bc05f618ec01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.17033078s
STEP: Saw pod success
Aug 21 06:43:03.338: INFO: Pod "downwardapi-volume-4bb9ec62-4bdc-4f88-89d1-bc05f618ec01" satisfied condition "Succeeded or Failed"
Aug 21 06:43:03.386: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-4bb9ec62-4bdc-4f88-89d1-bc05f618ec01 container client-container: 
STEP: delete the pod
Aug 21 06:43:03.410: INFO: Waiting for pod downwardapi-volume-4bb9ec62-4bdc-4f88-89d1-bc05f618ec01 to disappear
Aug 21 06:43:03.445: INFO: Pod downwardapi-volume-4bb9ec62-4bdc-4f88-89d1-bc05f618ec01 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:43:03.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5197" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2477,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:43:03.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1310
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-1310
I0821 06:43:03.697263      10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1310, replica count: 2
I0821 06:43:06.748955      10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 06:43:09.749801      10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 06:43:09.750: INFO: Creating new exec pod
Aug 21 06:43:14.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-1310 execpod8v4wr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 21 06:43:16.144: INFO: stderr: "I0821 06:43:16.024211    3125 log.go:172] (0x2b58310) (0x2b58380) Create stream\nI0821 06:43:16.026340    3125 log.go:172] (0x2b58310) (0x2b58380) Stream added, broadcasting: 1\nI0821 06:43:16.044230    3125 log.go:172] (0x2b58310) Reply frame received for 1\nI0821 06:43:16.044717    3125 log.go:172] (0x2b58310) (0x2fe4070) Create stream\nI0821 06:43:16.044834    3125 log.go:172] (0x2b58310) (0x2fe4070) Stream added, broadcasting: 3\nI0821 06:43:16.045829    3125 log.go:172] (0x2b58310) Reply frame received for 3\nI0821 06:43:16.046049    3125 log.go:172] (0x2b58310) (0x2fe42a0) Create stream\nI0821 06:43:16.046105    3125 log.go:172] (0x2b58310) (0x2fe42a0) Stream added, broadcasting: 5\nI0821 06:43:16.047255    3125 log.go:172] (0x2b58310) Reply frame received for 5\nI0821 06:43:16.116940    3125 log.go:172] (0x2b58310) Data frame received for 5\nI0821 06:43:16.117331    3125 log.go:172] (0x2b58310) Data frame received for 3\nI0821 06:43:16.117691    3125 log.go:172] (0x2fe4070) (3) Data frame handling\nI0821 06:43:16.117863    3125 log.go:172] (0x2fe42a0) (5) Data frame handling\nI0821 06:43:16.119145    3125 log.go:172] (0x2b58310) Data frame received for 1\nI0821 06:43:16.119300    3125 log.go:172] (0x2b58380) (1) Data frame handling\nI0821 06:43:16.120168    3125 log.go:172] (0x2b58380) (1) Data frame sent\nI0821 06:43:16.120455    3125 log.go:172] (0x2fe42a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0821 06:43:16.121450    3125 log.go:172] (0x2b58310) Data frame received for 5\nI0821 06:43:16.121588    3125 log.go:172] (0x2fe42a0) (5) Data frame handling\nI0821 06:43:16.121731    3125 log.go:172] (0x2fe42a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0821 06:43:16.121883    3125 log.go:172] (0x2b58310) Data frame received for 5\nI0821 06:43:16.122011    3125 log.go:172] (0x2fe42a0) (5) Data frame handling\nI0821 06:43:16.123165    3125 log.go:172] (0x2b58310) (0x2b58380) Stream removed, broadcasting: 1\nI0821 06:43:16.125234    3125 log.go:172] (0x2b58310) Go away received\nI0821 06:43:16.129547    3125 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0x2fe4070), 0x5:(*spdystream.Stream)(0x2fe42a0)}\nI0821 06:43:16.129860    3125 log.go:172] (0x2b58310) (0x2b58380) Stream removed, broadcasting: 1\nI0821 06:43:16.130162    3125 log.go:172] (0x2b58310) (0x2fe4070) Stream removed, broadcasting: 3\nI0821 06:43:16.130609    3125 log.go:172] (0x2b58310) (0x2fe42a0) Stream removed, broadcasting: 5\n"
Aug 21 06:43:16.145: INFO: stdout: ""
Aug 21 06:43:16.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-1310 execpod8v4wr -- /bin/sh -x -c nc -zv -t -w 2 10.102.221.249 80'
Aug 21 06:43:17.579: INFO: stderr: "I0821 06:43:17.434516    3148 log.go:172] (0x2c44310) (0x2c44380) Create stream\nI0821 06:43:17.440551    3148 log.go:172] (0x2c44310) (0x2c44380) Stream added, broadcasting: 1\nI0821 06:43:17.458294    3148 log.go:172] (0x2c44310) Reply frame received for 1\nI0821 06:43:17.458839    3148 log.go:172] (0x2c44310) (0x2f16070) Create stream\nI0821 06:43:17.458916    3148 log.go:172] (0x2c44310) (0x2f16070) Stream added, broadcasting: 3\nI0821 06:43:17.460283    3148 log.go:172] (0x2c44310) Reply frame received for 3\nI0821 06:43:17.460549    3148 log.go:172] (0x2c44310) (0x2a9e070) Create stream\nI0821 06:43:17.460624    3148 log.go:172] (0x2c44310) (0x2a9e070) Stream added, broadcasting: 5\nI0821 06:43:17.461723    3148 log.go:172] (0x2c44310) Reply frame received for 5\nI0821 06:43:17.557915    3148 log.go:172] (0x2c44310) Data frame received for 5\nI0821 06:43:17.558246    3148 log.go:172] (0x2c44310) Data frame received for 3\nI0821 06:43:17.558431    3148 log.go:172] (0x2f16070) (3) Data frame handling\nI0821 06:43:17.558512    3148 log.go:172] (0x2c44310) Data frame received for 1\nI0821 06:43:17.558624    3148 log.go:172] (0x2c44380) (1) Data frame handling\nI0821 06:43:17.558736    3148 log.go:172] (0x2a9e070) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.221.249 80\nConnection to 10.102.221.249 80 port [tcp/http] succeeded!\nI0821 06:43:17.561206    3148 log.go:172] (0x2a9e070) (5) Data frame sent\nI0821 06:43:17.561626    3148 log.go:172] (0x2c44310) Data frame received for 5\nI0821 06:43:17.561737    3148 log.go:172] (0x2a9e070) (5) Data frame handling\nI0821 06:43:17.561837    3148 log.go:172] (0x2c44380) (1) Data frame sent\nI0821 06:43:17.562854    3148 log.go:172] (0x2c44310) (0x2c44380) Stream removed, broadcasting: 1\nI0821 06:43:17.564017    3148 log.go:172] (0x2c44310) Go away received\nI0821 06:43:17.566423    3148 log.go:172] (0x2c44310) (0x2c44380) Stream removed, broadcasting: 1\nI0821 06:43:17.567066    3148 log.go:172] (0x2c44310) (0x2f16070) Stream removed, broadcasting: 3\nI0821 06:43:17.567292    3148 log.go:172] (0x2c44310) (0x2a9e070) Stream removed, broadcasting: 5\n"
Aug 21 06:43:17.580: INFO: stdout: ""
Aug 21 06:43:17.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-1310 execpod8v4wr -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 30362'
Aug 21 06:43:18.932: INFO: stderr: "I0821 06:43:18.811591    3173 log.go:172] (0x2b06070) (0x2b060e0) Create stream\nI0821 06:43:18.813747    3173 log.go:172] (0x2b06070) (0x2b060e0) Stream added, broadcasting: 1\nI0821 06:43:18.824087    3173 log.go:172] (0x2b06070) Reply frame received for 1\nI0821 06:43:18.824626    3173 log.go:172] (0x2b06070) (0x2b062a0) Create stream\nI0821 06:43:18.824697    3173 log.go:172] (0x2b06070) (0x2b062a0) Stream added, broadcasting: 3\nI0821 06:43:18.827352    3173 log.go:172] (0x2b06070) Reply frame received for 3\nI0821 06:43:18.828275    3173 log.go:172] (0x2b06070) (0x2cd6070) Create stream\nI0821 06:43:18.828471    3173 log.go:172] (0x2b06070) (0x2cd6070) Stream added, broadcasting: 5\nI0821 06:43:18.831227    3173 log.go:172] (0x2b06070) Reply frame received for 5\nI0821 06:43:18.911412    3173 log.go:172] (0x2b06070) Data frame received for 3\nI0821 06:43:18.911900    3173 log.go:172] (0x2b062a0) (3) Data frame handling\nI0821 06:43:18.913171    3173 log.go:172] (0x2b06070) Data frame received for 5\nI0821 06:43:18.913526    3173 log.go:172] (0x2cd6070) (5) Data frame handling\nI0821 06:43:18.913830    3173 log.go:172] (0x2b06070) Data frame received for 1\nI0821 06:43:18.913980    3173 log.go:172] (0x2b060e0) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 30362\nConnection to 172.18.0.16 30362 port [tcp/30362] succeeded!\nI0821 06:43:18.914598    3173 log.go:172] (0x2cd6070) (5) Data frame sent\nI0821 06:43:18.914805    3173 log.go:172] (0x2b060e0) (1) Data frame sent\nI0821 06:43:18.914901    3173 log.go:172] (0x2b06070) Data frame received for 5\nI0821 06:43:18.914970    3173 log.go:172] (0x2cd6070) (5) Data frame handling\nI0821 06:43:18.915812    3173 log.go:172] (0x2b06070) (0x2b060e0) Stream removed, broadcasting: 1\nI0821 06:43:18.918211    3173 log.go:172] (0x2b06070) Go away received\nI0821 06:43:18.919542    3173 log.go:172] (0x2b06070) (0x2b060e0) Stream removed, broadcasting: 1\nI0821 06:43:18.919903    3173 log.go:172] (0x2b06070) (0x2b062a0) Stream removed, broadcasting: 3\nI0821 06:43:18.920152    3173 log.go:172] (0x2b06070) (0x2cd6070) Stream removed, broadcasting: 5\n"
Aug 21 06:43:18.933: INFO: stdout: ""
Aug 21 06:43:18.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-1310 execpod8v4wr -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30362'
Aug 21 06:43:20.337: INFO: stderr: "I0821 06:43:20.206925    3197 log.go:172] (0x2c2c2a0) (0x2c2c310) Create stream\nI0821 06:43:20.209732    3197 log.go:172] (0x2c2c2a0) (0x2c2c310) Stream added, broadcasting: 1\nI0821 06:43:20.217486    3197 log.go:172] (0x2c2c2a0) Reply frame received for 1\nI0821 06:43:20.218569    3197 log.go:172] (0x2c2c2a0) (0x2c2c5b0) Create stream\nI0821 06:43:20.218688    3197 log.go:172] (0x2c2c2a0) (0x2c2c5b0) Stream added, broadcasting: 3\nI0821 06:43:20.221146    3197 log.go:172] (0x2c2c2a0) Reply frame received for 3\nI0821 06:43:20.221578    3197 log.go:172] (0x2c2c2a0) (0x28b89a0) Create stream\nI0821 06:43:20.221704    3197 log.go:172] (0x2c2c2a0) (0x28b89a0) Stream added, broadcasting: 5\nI0821 06:43:20.223183    3197 log.go:172] (0x2c2c2a0) Reply frame received for 5\nI0821 06:43:20.318149    3197 log.go:172] (0x2c2c2a0) Data frame received for 5\nI0821 06:43:20.319320    3197 log.go:172] (0x2c2c2a0) Data frame received for 1\nI0821 06:43:20.319595    3197 log.go:172] (0x2c2c2a0) Data frame received for 3\nI0821 06:43:20.319908    3197 log.go:172] (0x28b89a0) (5) Data frame handling\nI0821 06:43:20.320296    3197 log.go:172] (0x2c2c5b0) (3) Data frame handling\nI0821 06:43:20.320476    3197 log.go:172] (0x2c2c310) (1) Data frame handling\nI0821 06:43:20.320901    3197 log.go:172] (0x2c2c310) (1) Data frame sent\nI0821 06:43:20.321956    3197 log.go:172] (0x28b89a0) (5) Data frame sent\nI0821 06:43:20.322420    3197 log.go:172] (0x2c2c2a0) Data frame received for 5\nI0821 06:43:20.322558    3197 log.go:172] (0x28b89a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 30362\nConnection to 172.18.0.13 30362 port [tcp/30362] succeeded!\nI0821 06:43:20.324366    3197 log.go:172] (0x2c2c2a0) (0x2c2c310) Stream removed, broadcasting: 1\nI0821 06:43:20.325474    3197 log.go:172] (0x2c2c2a0) Go away received\nI0821 06:43:20.327708    3197 log.go:172] (0x2c2c2a0) (0x2c2c310) Stream removed, broadcasting: 1\nI0821 06:43:20.327939    3197 log.go:172] (0x2c2c2a0) (0x2c2c5b0) Stream removed, broadcasting: 3\nI0821 06:43:20.328116    3197 log.go:172] (0x2c2c2a0) (0x28b89a0) Stream removed, broadcasting: 5\n"
Aug 21 06:43:20.338: INFO: stdout: ""
Aug 21 06:43:20.338: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:43:20.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1310" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:16.940 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":155,"skipped":2481,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:43:20.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 21 06:43:20.521: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 06:43:20.550: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 06:43:20.556: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Logging pods the kubelet thinks are on node kali-worker before test
Aug 21 06:43:20.573: INFO: kindnet-kkxd5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:20.573: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 06:43:20.573: INFO: kube-proxy-vn4t5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:20.573: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 06:43:20.573: INFO: externalname-service-mskm9 from services-1310 started at 2020-08-21 06:43:03 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:20.573: INFO: 	Container externalname-service ready: true, restart count 0
Aug 21 06:43:20.573: INFO: execpod8v4wr from services-1310 started at 2020-08-21 06:43:09 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:20.573: INFO: 	Container agnhost-pause ready: true, restart count 0
Aug 21 06:43:20.574: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 21 06:43:20.583: INFO: kindnet-qzfqb from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:20.583: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 06:43:20.583: INFO: externalname-service-dsb4p from services-1310 started at 2020-08-21 06:43:03 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:20.583: INFO: 	Container externalname-service ready: true, restart count 0
Aug 21 06:43:20.583: INFO: kube-proxy-c52ll from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:20.583: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Aug 21 06:43:21.657: INFO: Pod kindnet-kkxd5 requesting resource cpu=100m on Node kali-worker
Aug 21 06:43:21.657: INFO: Pod kindnet-qzfqb requesting resource cpu=100m on Node kali-worker2
Aug 21 06:43:21.658: INFO: Pod kube-proxy-c52ll requesting resource cpu=0m on Node kali-worker2
Aug 21 06:43:21.658: INFO: Pod kube-proxy-vn4t5 requesting resource cpu=0m on Node kali-worker
Aug 21 06:43:21.658: INFO: Pod execpod8v4wr requesting resource cpu=0m on Node kali-worker
Aug 21 06:43:21.658: INFO: Pod externalname-service-dsb4p requesting resource cpu=0m on Node kali-worker2
Aug 21 06:43:21.658: INFO: Pod externalname-service-mskm9 requesting resource cpu=0m on Node kali-worker
STEP: Starting Pods to consume most of the cluster CPU.
Aug 21 06:43:21.658: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
Aug 21 06:43:21.671: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6983eae5-3de3-432f-b50a-6e98b0bb41c0.162d3570e3943ef3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1214/filler-pod-6983eae5-3de3-432f-b50a-6e98b0bb41c0 to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6983eae5-3de3-432f-b50a-6e98b0bb41c0.162d357146318bf1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6983eae5-3de3-432f-b50a-6e98b0bb41c0.162d35719baaaedc], Reason = [Created], Message = [Created container filler-pod-6983eae5-3de3-432f-b50a-6e98b0bb41c0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6983eae5-3de3-432f-b50a-6e98b0bb41c0.162d3571a8f29129], Reason = [Started], Message = [Started container filler-pod-6983eae5-3de3-432f-b50a-6e98b0bb41c0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e9bfc3a2-4320-416e-bc39-6b0459e142db.162d3570df94b2fb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1214/filler-pod-e9bfc3a2-4320-416e-bc39-6b0459e142db to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e9bfc3a2-4320-416e-bc39-6b0459e142db.162d357133f8dee4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e9bfc3a2-4320-416e-bc39-6b0459e142db.162d3571902afcce], Reason = [Created], Message = [Created container filler-pod-e9bfc3a2-4320-416e-bc39-6b0459e142db]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e9bfc3a2-4320-416e-bc39-6b0459e142db.162d3571a3ddd8dd], Reason = [Started], Message = [Started container filler-pod-e9bfc3a2-4320-416e-bc39-6b0459e142db]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162d3571d4e08844], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162d3571d7607bc2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:43:26.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1214" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:6.588 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":156,"skipped":2488,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:43:26.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 06:43:27.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:43:31.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7419" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2571,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:43:31.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 21 06:43:31.313: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 06:43:31.348: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 06:43:31.353: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
Aug 21 06:43:31.365: INFO: kindnet-kkxd5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:31.365: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 06:43:31.365: INFO: kube-proxy-vn4t5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:31.365: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 06:43:31.366: INFO: filler-pod-e9bfc3a2-4320-416e-bc39-6b0459e142db from sched-pred-1214 started at 2020-08-21 06:43:21 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:31.366: INFO: 	Container filler-pod-e9bfc3a2-4320-416e-bc39-6b0459e142db ready: true, restart count 0
Aug 21 06:43:31.366: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
Aug 21 06:43:31.380: INFO: kindnet-qzfqb from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:31.380: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 06:43:31.380: INFO: kube-proxy-c52ll from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:31.380: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 06:43:31.380: INFO: pod-logs-websocket-3d017f3c-09b9-409d-ac24-2726f0fb80f9 from pods-7419 started at 2020-08-21 06:43:27 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:31.380: INFO: 	Container main ready: true, restart count 0
Aug 21 06:43:31.380: INFO: filler-pod-6983eae5-3de3-432f-b50a-6e98b0bb41c0 from sched-pred-1214 started at 2020-08-21 06:43:21 +0000 UTC (1 container statuses recorded)
Aug 21 06:43:31.380: INFO: 	Container filler-pod-6983eae5-3de3-432f-b50a-6e98b0bb41c0 ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1bec3ad0-5d04-45e1-9217-3bec265c0e69 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-1bec3ad0-5d04-45e1-9217-3bec265c0e69 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1bec3ad0-5d04-45e1-9217-3bec265c0e69
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:43:41.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-392" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:10.387 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":158,"skipped":2606,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:43:41.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-7755
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 21 06:43:41.730: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 21 06:43:41.867: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 06:43:43.874: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 06:43:45.874: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 06:43:47.874: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 06:43:49.875: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 06:43:51.874: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 06:43:53.875: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 06:43:55.874: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 06:43:57.874: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 21 06:43:57.882: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 21 06:43:59.890: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 21 06:44:03.949: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.213:8080/dial?request=hostname&protocol=http&host=10.244.2.212&port=8080&tries=1'] Namespace:pod-network-test-7755 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:44:03.950: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:44:04.058128      10 log.go:172] (0xaba0930) (0xaba0a10) Create stream
I0821 06:44:04.058272      10 log.go:172] (0xaba0930) (0xaba0a10) Stream added, broadcasting: 1
I0821 06:44:04.062604      10 log.go:172] (0xaba0930) Reply frame received for 1
I0821 06:44:04.062870      10 log.go:172] (0xaba0930) (0xaf047e0) Create stream
I0821 06:44:04.062995      10 log.go:172] (0xaba0930) (0xaf047e0) Stream added, broadcasting: 3
I0821 06:44:04.064647      10 log.go:172] (0xaba0930) Reply frame received for 3
I0821 06:44:04.064850      10 log.go:172] (0xaba0930) (0xaf04b60) Create stream
I0821 06:44:04.064931      10 log.go:172] (0xaba0930) (0xaf04b60) Stream added, broadcasting: 5
I0821 06:44:04.066724      10 log.go:172] (0xaba0930) Reply frame received for 5
I0821 06:44:04.170842      10 log.go:172] (0xaba0930) Data frame received for 3
I0821 06:44:04.171040      10 log.go:172] (0xaf047e0) (3) Data frame handling
I0821 06:44:04.171202      10 log.go:172] (0xaf047e0) (3) Data frame sent
I0821 06:44:04.171498      10 log.go:172] (0xaba0930) Data frame received for 5
I0821 06:44:04.171581      10 log.go:172] (0xaf04b60) (5) Data frame handling
I0821 06:44:04.171749      10 log.go:172] (0xaba0930) Data frame received for 3
I0821 06:44:04.171875      10 log.go:172] (0xaf047e0) (3) Data frame handling
I0821 06:44:04.173343      10 log.go:172] (0xaba0930) Data frame received for 1
I0821 06:44:04.173419      10 log.go:172] (0xaba0a10) (1) Data frame handling
I0821 06:44:04.173535      10 log.go:172] (0xaba0a10) (1) Data frame sent
I0821 06:44:04.173657      10 log.go:172] (0xaba0930) (0xaba0a10) Stream removed, broadcasting: 1
I0821 06:44:04.173860      10 log.go:172] (0xaba0930) Go away received
I0821 06:44:04.174248      10 log.go:172] (0xaba0930) (0xaba0a10) Stream removed, broadcasting: 1
I0821 06:44:04.174370      10 log.go:172] (0xaba0930) (0xaf047e0) Stream removed, broadcasting: 3
I0821 06:44:04.174468      10 log.go:172] (0xaba0930) (0xaf04b60) Stream removed, broadcasting: 5
Aug 21 06:44:04.175: INFO: Waiting for responses: map[]
Aug 21 06:44:04.180: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.213:8080/dial?request=hostname&protocol=http&host=10.244.1.228&port=8080&tries=1'] Namespace:pod-network-test-7755 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:44:04.181: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:44:04.288184      10 log.go:172] (0xaf05490) (0xaf05570) Create stream
I0821 06:44:04.288339      10 log.go:172] (0xaf05490) (0xaf05570) Stream added, broadcasting: 1
I0821 06:44:04.292823      10 log.go:172] (0xaf05490) Reply frame received for 1
I0821 06:44:04.292996      10 log.go:172] (0xaf05490) (0x7898700) Create stream
I0821 06:44:04.293086      10 log.go:172] (0xaf05490) (0x7898700) Stream added, broadcasting: 3
I0821 06:44:04.294591      10 log.go:172] (0xaf05490) Reply frame received for 3
I0821 06:44:04.294790      10 log.go:172] (0xaf05490) (0x78990a0) Create stream
I0821 06:44:04.294887      10 log.go:172] (0xaf05490) (0x78990a0) Stream added, broadcasting: 5
I0821 06:44:04.296523      10 log.go:172] (0xaf05490) Reply frame received for 5
I0821 06:44:04.367162      10 log.go:172] (0xaf05490) Data frame received for 3
I0821 06:44:04.367314      10 log.go:172] (0x7898700) (3) Data frame handling
I0821 06:44:04.367430      10 log.go:172] (0x7898700) (3) Data frame sent
I0821 06:44:04.367556      10 log.go:172] (0xaf05490) Data frame received for 5
I0821 06:44:04.367760      10 log.go:172] (0x78990a0) (5) Data frame handling
I0821 06:44:04.367893      10 log.go:172] (0xaf05490) Data frame received for 3
I0821 06:44:04.368016      10 log.go:172] (0x7898700) (3) Data frame handling
I0821 06:44:04.369119      10 log.go:172] (0xaf05490) Data frame received for 1
I0821 06:44:04.369278      10 log.go:172] (0xaf05570) (1) Data frame handling
I0821 06:44:04.369463      10 log.go:172] (0xaf05570) (1) Data frame sent
I0821 06:44:04.369626      10 log.go:172] (0xaf05490) (0xaf05570) Stream removed, broadcasting: 1
I0821 06:44:04.369774      10 log.go:172] (0xaf05490) Go away received
I0821 06:44:04.370042      10 log.go:172] (0xaf05490) (0xaf05570) Stream removed, broadcasting: 1
I0821 06:44:04.370166      10 log.go:172] (0xaf05490) (0x7898700) Stream removed, broadcasting: 3
I0821 06:44:04.370261      10 log.go:172] (0xaf05490) (0x78990a0) Stream removed, broadcasting: 5
Aug 21 06:44:04.370: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:44:04.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7755" for this suite.

• [SLOW TEST:22.773 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2661,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:44:04.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 21 06:44:04.493: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:44:11.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7264" for this suite.

• [SLOW TEST:7.210 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":160,"skipped":2685,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:44:11.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 06:44:16.029: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:44:16.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3616" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2706,"failed":0}
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:44:16.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:44:20.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1093" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":162,"skipped":2707,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:44:20.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Aug 21 06:44:20.448: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:44:21.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3309" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":163,"skipped":2753,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:44:21.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-d530b7b7-6699-4879-9182-725dd4475cd2
STEP: Creating a pod to test consume secrets
Aug 21 06:44:21.702: INFO: Waiting up to 5m0s for pod "pod-secrets-d0539f27-5807-4fd9-88d2-0833be0a6c7f" in namespace "secrets-403" to be "Succeeded or Failed"
Aug 21 06:44:21.726: INFO: Pod "pod-secrets-d0539f27-5807-4fd9-88d2-0833be0a6c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.399959ms
Aug 21 06:44:23.801: INFO: Pod "pod-secrets-d0539f27-5807-4fd9-88d2-0833be0a6c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098395398s
Aug 21 06:44:25.808: INFO: Pod "pod-secrets-d0539f27-5807-4fd9-88d2-0833be0a6c7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10604098s
STEP: Saw pod success
Aug 21 06:44:25.809: INFO: Pod "pod-secrets-d0539f27-5807-4fd9-88d2-0833be0a6c7f" satisfied condition "Succeeded or Failed"
Aug 21 06:44:25.852: INFO: Trying to get logs from node kali-worker pod pod-secrets-d0539f27-5807-4fd9-88d2-0833be0a6c7f container secret-volume-test: 
STEP: delete the pod
Aug 21 06:44:25.989: INFO: Waiting for pod pod-secrets-d0539f27-5807-4fd9-88d2-0833be0a6c7f to disappear
Aug 21 06:44:25.994: INFO: Pod pod-secrets-d0539f27-5807-4fd9-88d2-0833be0a6c7f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:44:25.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-403" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2765,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:44:26.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-8322, will wait for the garbage collector to delete the pods
Aug 21 06:44:32.174: INFO: Deleting Job.batch foo took: 8.014417ms
Aug 21 06:44:32.474: INFO: Terminating Job.batch foo pods took: 300.911553ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:45:09.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8322" for this suite.

• [SLOW TEST:43.206 seconds]
[sig-apps] Job
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":165,"skipped":2775,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:45:09.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-cmkj
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 06:45:09.302: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-cmkj" in namespace "subpath-5066" to be "Succeeded or Failed"
Aug 21 06:45:09.343: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Pending", Reason="", readiness=false. Elapsed: 41.250923ms
Aug 21 06:45:11.472: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170652923s
Aug 21 06:45:13.479: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Running", Reason="", readiness=true. Elapsed: 4.177210188s
Aug 21 06:45:15.485: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Running", Reason="", readiness=true. Elapsed: 6.183127022s
Aug 21 06:45:17.507: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Running", Reason="", readiness=true. Elapsed: 8.20480771s
Aug 21 06:45:19.514: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Running", Reason="", readiness=true. Elapsed: 10.212169613s
Aug 21 06:45:21.522: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Running", Reason="", readiness=true. Elapsed: 12.219955641s
Aug 21 06:45:23.530: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Running", Reason="", readiness=true. Elapsed: 14.228326209s
Aug 21 06:45:25.537: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Running", Reason="", readiness=true. Elapsed: 16.235158173s
Aug 21 06:45:27.544: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Running", Reason="", readiness=true. Elapsed: 18.24270446s
Aug 21 06:45:29.551: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Running", Reason="", readiness=true. Elapsed: 20.249127823s
Aug 21 06:45:31.559: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Running", Reason="", readiness=true. Elapsed: 22.256827551s
Aug 21 06:45:33.566: INFO: Pod "pod-subpath-test-secret-cmkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.264133899s
STEP: Saw pod success
Aug 21 06:45:33.566: INFO: Pod "pod-subpath-test-secret-cmkj" satisfied condition "Succeeded or Failed"
Aug 21 06:45:33.572: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-cmkj container test-container-subpath-secret-cmkj: 
STEP: delete the pod
Aug 21 06:45:33.653: INFO: Waiting for pod pod-subpath-test-secret-cmkj to disappear
Aug 21 06:45:33.666: INFO: Pod pod-subpath-test-secret-cmkj no longer exists
STEP: Deleting pod pod-subpath-test-secret-cmkj
Aug 21 06:45:33.667: INFO: Deleting pod "pod-subpath-test-secret-cmkj" in namespace "subpath-5066"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:45:33.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5066" for this suite.

• [SLOW TEST:24.468 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":166,"skipped":2777,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:45:33.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 21 06:45:34.127: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 21 06:45:39.134: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:45:40.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4960" for this suite.

• [SLOW TEST:6.489 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":167,"skipped":2886,"failed":0}
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:45:40.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Aug 21 06:45:41.006: INFO: created pod pod-service-account-defaultsa
Aug 21 06:45:41.006: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 21 06:45:41.192: INFO: created pod pod-service-account-mountsa
Aug 21 06:45:41.193: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 21 06:45:41.205: INFO: created pod pod-service-account-nomountsa
Aug 21 06:45:41.205: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 21 06:45:41.397: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 21 06:45:41.397: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 21 06:45:41.454: INFO: created pod pod-service-account-mountsa-mountspec
Aug 21 06:45:41.454: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 21 06:45:41.596: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 21 06:45:41.596: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 21 06:45:41.605: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 21 06:45:41.605: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 21 06:45:41.887: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 21 06:45:41.887: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 21 06:45:41.923: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 21 06:45:41.923: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:45:41.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4631" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":168,"skipped":2889,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:45:42.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0821 06:45:46.749031      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 06:45:46.749: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:45:46.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7592" for this suite.

• [SLOW TEST:5.215 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":169,"skipped":2944,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:45:47.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-a00329af-a018-48bd-87ab-f520f356aec2
STEP: Creating a pod to test consume secrets
Aug 21 06:45:50.252: INFO: Waiting up to 5m0s for pod "pod-secrets-b63751e3-e62a-4003-ab4d-7e61985aa30d" in namespace "secrets-8359" to be "Succeeded or Failed"
Aug 21 06:45:50.749: INFO: Pod "pod-secrets-b63751e3-e62a-4003-ab4d-7e61985aa30d": Phase="Pending", Reason="", readiness=false. Elapsed: 496.289841ms
Aug 21 06:45:53.115: INFO: Pod "pod-secrets-b63751e3-e62a-4003-ab4d-7e61985aa30d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.862419935s
Aug 21 06:45:55.456: INFO: Pod "pod-secrets-b63751e3-e62a-4003-ab4d-7e61985aa30d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.203785609s
Aug 21 06:45:57.738: INFO: Pod "pod-secrets-b63751e3-e62a-4003-ab4d-7e61985aa30d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.485614383s
Aug 21 06:45:59.764: INFO: Pod "pod-secrets-b63751e3-e62a-4003-ab4d-7e61985aa30d": Phase="Running", Reason="", readiness=true. Elapsed: 9.511367926s
Aug 21 06:46:01.772: INFO: Pod "pod-secrets-b63751e3-e62a-4003-ab4d-7e61985aa30d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.519508728s
STEP: Saw pod success
Aug 21 06:46:01.772: INFO: Pod "pod-secrets-b63751e3-e62a-4003-ab4d-7e61985aa30d" satisfied condition "Succeeded or Failed"
Aug 21 06:46:01.776: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-b63751e3-e62a-4003-ab4d-7e61985aa30d container secret-volume-test: 
STEP: delete the pod
Aug 21 06:46:01.858: INFO: Waiting for pod pod-secrets-b63751e3-e62a-4003-ab4d-7e61985aa30d to disappear
Aug 21 06:46:01.863: INFO: Pod pod-secrets-b63751e3-e62a-4003-ab4d-7e61985aa30d no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:46:01.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8359" for this suite.
STEP: Destroying namespace "secret-namespace-3363" for this suite.

• [SLOW TEST:14.365 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2972,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:46:01.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Aug 21 06:46:01.994: INFO: Waiting up to 5m0s for pod "var-expansion-9a9687a9-c787-459f-b89e-690c76be7f4f" in namespace "var-expansion-7194" to be "Succeeded or Failed"
Aug 21 06:46:02.001: INFO: Pod "var-expansion-9a9687a9-c787-459f-b89e-690c76be7f4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.774768ms
Aug 21 06:46:04.078: INFO: Pod "var-expansion-9a9687a9-c787-459f-b89e-690c76be7f4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08386724s
Aug 21 06:46:06.085: INFO: Pod "var-expansion-9a9687a9-c787-459f-b89e-690c76be7f4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090769171s
STEP: Saw pod success
Aug 21 06:46:06.085: INFO: Pod "var-expansion-9a9687a9-c787-459f-b89e-690c76be7f4f" satisfied condition "Succeeded or Failed"
Aug 21 06:46:06.090: INFO: Trying to get logs from node kali-worker2 pod var-expansion-9a9687a9-c787-459f-b89e-690c76be7f4f container dapi-container: 
STEP: delete the pod
Aug 21 06:46:06.151: INFO: Waiting for pod var-expansion-9a9687a9-c787-459f-b89e-690c76be7f4f to disappear
Aug 21 06:46:06.159: INFO: Pod var-expansion-9a9687a9-c787-459f-b89e-690c76be7f4f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:46:06.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7194" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":2974,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:46:06.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-102a8a5a-2cef-4349-86ce-127e79526f0b in namespace container-probe-824
Aug 21 06:46:10.297: INFO: Started pod liveness-102a8a5a-2cef-4349-86ce-127e79526f0b in namespace container-probe-824
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 06:46:10.301: INFO: Initial restart count of pod liveness-102a8a5a-2cef-4349-86ce-127e79526f0b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:50:11.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-824" for this suite.

• [SLOW TEST:245.320 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2999,"failed":0}
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:50:11.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-932878f4-182f-48e6-b5ff-883b67ea5a2b in namespace container-probe-4117
Aug 21 06:50:15.798: INFO: Started pod busybox-932878f4-182f-48e6-b5ff-883b67ea5a2b in namespace container-probe-4117
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 06:50:15.804: INFO: Initial restart count of pod busybox-932878f4-182f-48e6-b5ff-883b67ea5a2b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:54:16.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4117" for this suite.

• [SLOW TEST:245.248 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2999,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:54:16.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-f02e8af4-8ff1-432c-9fc8-17a508e320e6
STEP: Creating a pod to test consume secrets
Aug 21 06:54:16.942: INFO: Waiting up to 5m0s for pod "pod-secrets-d30c034b-c37b-4e97-9e0b-c55e0773541c" in namespace "secrets-7059" to be "Succeeded or Failed"
Aug 21 06:54:16.991: INFO: Pod "pod-secrets-d30c034b-c37b-4e97-9e0b-c55e0773541c": Phase="Pending", Reason="", readiness=false. Elapsed: 48.420601ms
Aug 21 06:54:19.022: INFO: Pod "pod-secrets-d30c034b-c37b-4e97-9e0b-c55e0773541c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07966123s
Aug 21 06:54:21.029: INFO: Pod "pod-secrets-d30c034b-c37b-4e97-9e0b-c55e0773541c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086997794s
STEP: Saw pod success
Aug 21 06:54:21.030: INFO: Pod "pod-secrets-d30c034b-c37b-4e97-9e0b-c55e0773541c" satisfied condition "Succeeded or Failed"
Aug 21 06:54:21.035: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-d30c034b-c37b-4e97-9e0b-c55e0773541c container secret-volume-test: 
STEP: delete the pod
Aug 21 06:54:21.090: INFO: Waiting for pod pod-secrets-d30c034b-c37b-4e97-9e0b-c55e0773541c to disappear
Aug 21 06:54:21.095: INFO: Pod pod-secrets-d30c034b-c37b-4e97-9e0b-c55e0773541c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:54:21.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7059" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":3000,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:54:21.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-35ca6bb9-7461-4dc3-9d2a-e7bde1ff9cad
STEP: Creating a pod to test consume configMaps
Aug 21 06:54:21.202: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb28a83d-f0a1-47ae-a961-f2ebbe9f3c6e" in namespace "configmap-6341" to be "Succeeded or Failed"
Aug 21 06:54:21.219: INFO: Pod "pod-configmaps-bb28a83d-f0a1-47ae-a961-f2ebbe9f3c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.185203ms
Aug 21 06:54:23.227: INFO: Pod "pod-configmaps-bb28a83d-f0a1-47ae-a961-f2ebbe9f3c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024814578s
Aug 21 06:54:25.234: INFO: Pod "pod-configmaps-bb28a83d-f0a1-47ae-a961-f2ebbe9f3c6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031626606s
STEP: Saw pod success
Aug 21 06:54:25.234: INFO: Pod "pod-configmaps-bb28a83d-f0a1-47ae-a961-f2ebbe9f3c6e" satisfied condition "Succeeded or Failed"
Aug 21 06:54:25.239: INFO: Trying to get logs from node kali-worker pod pod-configmaps-bb28a83d-f0a1-47ae-a961-f2ebbe9f3c6e container configmap-volume-test: 
STEP: delete the pod
Aug 21 06:54:25.403: INFO: Waiting for pod pod-configmaps-bb28a83d-f0a1-47ae-a961-f2ebbe9f3c6e to disappear
Aug 21 06:54:25.503: INFO: Pod pod-configmaps-bb28a83d-f0a1-47ae-a961-f2ebbe9f3c6e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:54:25.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6341" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":3018,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:54:25.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 21 06:54:35.738: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6933 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:54:35.738: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:54:35.844904      10 log.go:172] (0x98c6310) (0x98c65b0) Create stream
I0821 06:54:35.845089      10 log.go:172] (0x98c6310) (0x98c65b0) Stream added, broadcasting: 1
I0821 06:54:35.850005      10 log.go:172] (0x98c6310) Reply frame received for 1
I0821 06:54:35.850287      10 log.go:172] (0x98c6310) (0x98c6d90) Create stream
I0821 06:54:35.850418      10 log.go:172] (0x98c6310) (0x98c6d90) Stream added, broadcasting: 3
I0821 06:54:35.852588      10 log.go:172] (0x98c6310) Reply frame received for 3
I0821 06:54:35.852951      10 log.go:172] (0x98c6310) (0x7e65110) Create stream
I0821 06:54:35.853088      10 log.go:172] (0x98c6310) (0x7e65110) Stream added, broadcasting: 5
I0821 06:54:35.854755      10 log.go:172] (0x98c6310) Reply frame received for 5
I0821 06:54:35.942737      10 log.go:172] (0x98c6310) Data frame received for 5
I0821 06:54:35.942938      10 log.go:172] (0x7e65110) (5) Data frame handling
I0821 06:54:35.943100      10 log.go:172] (0x98c6310) Data frame received for 3
I0821 06:54:35.943239      10 log.go:172] (0x98c6d90) (3) Data frame handling
I0821 06:54:35.943373      10 log.go:172] (0x98c6d90) (3) Data frame sent
I0821 06:54:35.943489      10 log.go:172] (0x98c6310) Data frame received for 3
I0821 06:54:35.943610      10 log.go:172] (0x98c6d90) (3) Data frame handling
I0821 06:54:35.944115      10 log.go:172] (0x98c6310) Data frame received for 1
I0821 06:54:35.944365      10 log.go:172] (0x98c65b0) (1) Data frame handling
I0821 06:54:35.944564      10 log.go:172] (0x98c65b0) (1) Data frame sent
I0821 06:54:35.944885      10 log.go:172] (0x98c6310) (0x98c65b0) Stream removed, broadcasting: 1
I0821 06:54:35.945112      10 log.go:172] (0x98c6310) Go away received
I0821 06:54:35.945684      10 log.go:172] (0x98c6310) (0x98c65b0) Stream removed, broadcasting: 1
I0821 06:54:35.945908      10 log.go:172] (0x98c6310) (0x98c6d90) Stream removed, broadcasting: 3
I0821 06:54:35.946087      10 log.go:172] (0x98c6310) (0x7e65110) Stream removed, broadcasting: 5
Aug 21 06:54:35.946: INFO: Exec stderr: ""
Aug 21 06:54:35.946: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6933 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:54:35.946: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:54:36.054718      10 log.go:172] (0x8dcf5e0) (0x88641c0) Create stream
I0821 06:54:36.055006      10 log.go:172] (0x8dcf5e0) (0x88641c0) Stream added, broadcasting: 1
I0821 06:54:36.059238      10 log.go:172] (0x8dcf5e0) Reply frame received for 1
I0821 06:54:36.059476      10 log.go:172] (0x8dcf5e0) (0xa2e99d0) Create stream
I0821 06:54:36.059601      10 log.go:172] (0x8dcf5e0) (0xa2e99d0) Stream added, broadcasting: 3
I0821 06:54:36.061376      10 log.go:172] (0x8dcf5e0) Reply frame received for 3
I0821 06:54:36.061516      10 log.go:172] (0x8dcf5e0) (0x9f79880) Create stream
I0821 06:54:36.061603      10 log.go:172] (0x8dcf5e0) (0x9f79880) Stream added, broadcasting: 5
I0821 06:54:36.062834      10 log.go:172] (0x8dcf5e0) Reply frame received for 5
I0821 06:54:36.132710      10 log.go:172] (0x8dcf5e0) Data frame received for 5
I0821 06:54:36.132988      10 log.go:172] (0x9f79880) (5) Data frame handling
I0821 06:54:36.133088      10 log.go:172] (0x8dcf5e0) Data frame received for 3
I0821 06:54:36.133227      10 log.go:172] (0xa2e99d0) (3) Data frame handling
I0821 06:54:36.133333      10 log.go:172] (0xa2e99d0) (3) Data frame sent
I0821 06:54:36.133414      10 log.go:172] (0x8dcf5e0) Data frame received for 3
I0821 06:54:36.133499      10 log.go:172] (0xa2e99d0) (3) Data frame handling
I0821 06:54:36.134206      10 log.go:172] (0x8dcf5e0) Data frame received for 1
I0821 06:54:36.134343      10 log.go:172] (0x88641c0) (1) Data frame handling
I0821 06:54:36.134449      10 log.go:172] (0x88641c0) (1) Data frame sent
I0821 06:54:36.134606      10 log.go:172] (0x8dcf5e0) (0x88641c0) Stream removed, broadcasting: 1
I0821 06:54:36.134776      10 log.go:172] (0x8dcf5e0) Go away received
I0821 06:54:36.135277      10 log.go:172] (0x8dcf5e0) (0x88641c0) Stream removed, broadcasting: 1
I0821 06:54:36.135480      10 log.go:172] (0x8dcf5e0) (0xa2e99d0) Stream removed, broadcasting: 3
I0821 06:54:36.135621      10 log.go:172] (0x8dcf5e0) (0x9f79880) Stream removed, broadcasting: 5
Aug 21 06:54:36.135: INFO: Exec stderr: ""
Aug 21 06:54:36.136: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6933 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:54:36.136: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:54:36.239410      10 log.go:172] (0xa970930) (0xa971030) Create stream
I0821 06:54:36.239556      10 log.go:172] (0xa970930) (0xa971030) Stream added, broadcasting: 1
I0821 06:54:36.242823      10 log.go:172] (0xa970930) Reply frame received for 1
I0821 06:54:36.243015      10 log.go:172] (0xa970930) (0x70b0700) Create stream
I0821 06:54:36.243106      10 log.go:172] (0xa970930) (0x70b0700) Stream added, broadcasting: 3
I0821 06:54:36.244503      10 log.go:172] (0xa970930) Reply frame received for 3
I0821 06:54:36.244720      10 log.go:172] (0xa970930) (0xa6c0380) Create stream
I0821 06:54:36.244863      10 log.go:172] (0xa970930) (0xa6c0380) Stream added, broadcasting: 5
I0821 06:54:36.246159      10 log.go:172] (0xa970930) Reply frame received for 5
I0821 06:54:36.315676      10 log.go:172] (0xa970930) Data frame received for 5
I0821 06:54:36.315943      10 log.go:172] (0xa6c0380) (5) Data frame handling
I0821 06:54:36.316085      10 log.go:172] (0xa970930) Data frame received for 3
I0821 06:54:36.316246      10 log.go:172] (0x70b0700) (3) Data frame handling
I0821 06:54:36.316439      10 log.go:172] (0x70b0700) (3) Data frame sent
I0821 06:54:36.316618      10 log.go:172] (0xa970930) Data frame received for 3
I0821 06:54:36.316941      10 log.go:172] (0x70b0700) (3) Data frame handling
I0821 06:54:36.317374      10 log.go:172] (0xa970930) Data frame received for 1
I0821 06:54:36.317535      10 log.go:172] (0xa971030) (1) Data frame handling
I0821 06:54:36.317661      10 log.go:172] (0xa971030) (1) Data frame sent
I0821 06:54:36.317843      10 log.go:172] (0xa970930) (0xa971030) Stream removed, broadcasting: 1
I0821 06:54:36.318065      10 log.go:172] (0xa970930) Go away received
I0821 06:54:36.318709      10 log.go:172] (0xa970930) (0xa971030) Stream removed, broadcasting: 1
I0821 06:54:36.318878      10 log.go:172] (0xa970930) (0x70b0700) Stream removed, broadcasting: 3
I0821 06:54:36.319020      10 log.go:172] (0xa970930) (0xa6c0380) Stream removed, broadcasting: 5
Aug 21 06:54:36.319: INFO: Exec stderr: ""
Aug 21 06:54:36.319: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6933 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:54:36.319: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:54:36.424990      10 log.go:172] (0xaaf0fc0) (0xaaf1180) Create stream
I0821 06:54:36.425244      10 log.go:172] (0xaaf0fc0) (0xaaf1180) Stream added, broadcasting: 1
I0821 06:54:36.430538      10 log.go:172] (0xaaf0fc0) Reply frame received for 1
I0821 06:54:36.430750      10 log.go:172] (0xaaf0fc0) (0x868b0a0) Create stream
I0821 06:54:36.430869      10 log.go:172] (0xaaf0fc0) (0x868b0a0) Stream added, broadcasting: 3
I0821 06:54:36.432534      10 log.go:172] (0xaaf0fc0) Reply frame received for 3
I0821 06:54:36.432923      10 log.go:172] (0xaaf0fc0) (0xa2e9e30) Create stream
I0821 06:54:36.433095      10 log.go:172] (0xaaf0fc0) (0xa2e9e30) Stream added, broadcasting: 5
I0821 06:54:36.434985      10 log.go:172] (0xaaf0fc0) Reply frame received for 5
I0821 06:54:36.509360      10 log.go:172] (0xaaf0fc0) Data frame received for 5
I0821 06:54:36.509568      10 log.go:172] (0xa2e9e30) (5) Data frame handling
I0821 06:54:36.509697      10 log.go:172] (0xaaf0fc0) Data frame received for 3
I0821 06:54:36.509773      10 log.go:172] (0x868b0a0) (3) Data frame handling
I0821 06:54:36.509867      10 log.go:172] (0x868b0a0) (3) Data frame sent
I0821 06:54:36.509943      10 log.go:172] (0xaaf0fc0) Data frame received for 3
I0821 06:54:36.510015      10 log.go:172] (0x868b0a0) (3) Data frame handling
I0821 06:54:36.510733      10 log.go:172] (0xaaf0fc0) Data frame received for 1
I0821 06:54:36.510849      10 log.go:172] (0xaaf1180) (1) Data frame handling
I0821 06:54:36.510944      10 log.go:172] (0xaaf1180) (1) Data frame sent
I0821 06:54:36.511038      10 log.go:172] (0xaaf0fc0) (0xaaf1180) Stream removed, broadcasting: 1
I0821 06:54:36.511150      10 log.go:172] (0xaaf0fc0) Go away received
I0821 06:54:36.511524      10 log.go:172] (0xaaf0fc0) (0xaaf1180) Stream removed, broadcasting: 1
I0821 06:54:36.511695      10 log.go:172] (0xaaf0fc0) (0x868b0a0) Stream removed, broadcasting: 3
I0821 06:54:36.511821      10 log.go:172] (0xaaf0fc0) (0xa2e9e30) Stream removed, broadcasting: 5
Aug 21 06:54:36.511: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 21 06:54:36.512: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6933 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:54:36.512: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:54:36.619264      10 log.go:172] (0x78990a0) (0x7899340) Create stream
I0821 06:54:36.619455      10 log.go:172] (0x78990a0) (0x7899340) Stream added, broadcasting: 1
I0821 06:54:36.624667      10 log.go:172] (0x78990a0) Reply frame received for 1
I0821 06:54:36.625019      10 log.go:172] (0x78990a0) (0xbd66fc0) Create stream
I0821 06:54:36.625147      10 log.go:172] (0x78990a0) (0xbd66fc0) Stream added, broadcasting: 3
I0821 06:54:36.626852      10 log.go:172] (0x78990a0) Reply frame received for 3
I0821 06:54:36.627005      10 log.go:172] (0x78990a0) (0xbd67500) Create stream
I0821 06:54:36.627100      10 log.go:172] (0x78990a0) (0xbd67500) Stream added, broadcasting: 5
I0821 06:54:36.628660      10 log.go:172] (0x78990a0) Reply frame received for 5
I0821 06:54:36.692899      10 log.go:172] (0x78990a0) Data frame received for 3
I0821 06:54:36.693116      10 log.go:172] (0xbd66fc0) (3) Data frame handling
I0821 06:54:36.693239      10 log.go:172] (0x78990a0) Data frame received for 5
I0821 06:54:36.693428      10 log.go:172] (0xbd67500) (5) Data frame handling
I0821 06:54:36.693569      10 log.go:172] (0xbd66fc0) (3) Data frame sent
I0821 06:54:36.693748      10 log.go:172] (0x78990a0) Data frame received for 3
I0821 06:54:36.693918      10 log.go:172] (0xbd66fc0) (3) Data frame handling
I0821 06:54:36.694385      10 log.go:172] (0x78990a0) Data frame received for 1
I0821 06:54:36.694633      10 log.go:172] (0x7899340) (1) Data frame handling
I0821 06:54:36.694843      10 log.go:172] (0x7899340) (1) Data frame sent
I0821 06:54:36.695076      10 log.go:172] (0x78990a0) (0x7899340) Stream removed, broadcasting: 1
I0821 06:54:36.695250      10 log.go:172] (0x78990a0) Go away received
I0821 06:54:36.695624      10 log.go:172] (0x78990a0) (0x7899340) Stream removed, broadcasting: 1
I0821 06:54:36.695783      10 log.go:172] (0x78990a0) (0xbd66fc0) Stream removed, broadcasting: 3
I0821 06:54:36.695937      10 log.go:172] (0x78990a0) (0xbd67500) Stream removed, broadcasting: 5
Aug 21 06:54:36.696: INFO: Exec stderr: ""
Aug 21 06:54:36.696: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6933 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:54:36.696: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:54:36.800875      10 log.go:172] (0xaf04620) (0xaf04700) Create stream
I0821 06:54:36.801025      10 log.go:172] (0xaf04620) (0xaf04700) Stream added, broadcasting: 1
I0821 06:54:36.807621      10 log.go:172] (0xaf04620) Reply frame received for 1
I0821 06:54:36.807751      10 log.go:172] (0xaf04620) (0xaf04a80) Create stream
I0821 06:54:36.807813      10 log.go:172] (0xaf04620) (0xaf04a80) Stream added, broadcasting: 3
I0821 06:54:36.809417      10 log.go:172] (0xaf04620) Reply frame received for 3
I0821 06:54:36.809646      10 log.go:172] (0xaf04620) (0x8fd63f0) Create stream
I0821 06:54:36.809766      10 log.go:172] (0xaf04620) (0x8fd63f0) Stream added, broadcasting: 5
I0821 06:54:36.811159      10 log.go:172] (0xaf04620) Reply frame received for 5
I0821 06:54:36.880162      10 log.go:172] (0xaf04620) Data frame received for 3
I0821 06:54:36.880461      10 log.go:172] (0xaf04a80) (3) Data frame handling
I0821 06:54:36.880819      10 log.go:172] (0xaf04620) Data frame received for 5
I0821 06:54:36.881007      10 log.go:172] (0x8fd63f0) (5) Data frame handling
I0821 06:54:36.881203      10 log.go:172] (0xaf04a80) (3) Data frame sent
I0821 06:54:36.881401      10 log.go:172] (0xaf04620) Data frame received for 3
I0821 06:54:36.881572      10 log.go:172] (0xaf04a80) (3) Data frame handling
I0821 06:54:36.882070      10 log.go:172] (0xaf04620) Data frame received for 1
I0821 06:54:36.882226      10 log.go:172] (0xaf04700) (1) Data frame handling
I0821 06:54:36.882364      10 log.go:172] (0xaf04700) (1) Data frame sent
I0821 06:54:36.882534      10 log.go:172] (0xaf04620) (0xaf04700) Stream removed, broadcasting: 1
I0821 06:54:36.882756      10 log.go:172] (0xaf04620) Go away received
I0821 06:54:36.883396      10 log.go:172] (0xaf04620) (0xaf04700) Stream removed, broadcasting: 1
I0821 06:54:36.883596      10 log.go:172] (0xaf04620) (0xaf04a80) Stream removed, broadcasting: 3
I0821 06:54:36.883770      10 log.go:172] (0xaf04620) (0x8fd63f0) Stream removed, broadcasting: 5
Aug 21 06:54:36.883: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 21 06:54:36.884: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6933 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:54:36.884: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:54:36.998084      10 log.go:172] (0x99b3340) (0x99b3500) Create stream
I0821 06:54:36.998274      10 log.go:172] (0x99b3340) (0x99b3500) Stream added, broadcasting: 1
I0821 06:54:37.002348      10 log.go:172] (0x99b3340) Reply frame received for 1
I0821 06:54:37.002584      10 log.go:172] (0x99b3340) (0xaba0380) Create stream
I0821 06:54:37.002714      10 log.go:172] (0x99b3340) (0xaba0380) Stream added, broadcasting: 3
I0821 06:54:37.004883      10 log.go:172] (0x99b3340) Reply frame received for 3
I0821 06:54:37.005086      10 log.go:172] (0x99b3340) (0x99b3880) Create stream
I0821 06:54:37.005223      10 log.go:172] (0x99b3340) (0x99b3880) Stream added, broadcasting: 5
I0821 06:54:37.006918      10 log.go:172] (0x99b3340) Reply frame received for 5
I0821 06:54:37.072455      10 log.go:172] (0x99b3340) Data frame received for 3
I0821 06:54:37.072698      10 log.go:172] (0xaba0380) (3) Data frame handling
I0821 06:54:37.072976      10 log.go:172] (0x99b3340) Data frame received for 5
I0821 06:54:37.073256      10 log.go:172] (0x99b3880) (5) Data frame handling
I0821 06:54:37.073519      10 log.go:172] (0xaba0380) (3) Data frame sent
I0821 06:54:37.073693      10 log.go:172] (0x99b3340) Data frame received for 3
I0821 06:54:37.073857      10 log.go:172] (0xaba0380) (3) Data frame handling
I0821 06:54:37.074093      10 log.go:172] (0x99b3340) Data frame received for 1
I0821 06:54:37.074258      10 log.go:172] (0x99b3500) (1) Data frame handling
I0821 06:54:37.074384      10 log.go:172] (0x99b3500) (1) Data frame sent
I0821 06:54:37.074504      10 log.go:172] (0x99b3340) (0x99b3500) Stream removed, broadcasting: 1
I0821 06:54:37.074648      10 log.go:172] (0x99b3340) Go away received
I0821 06:54:37.074970      10 log.go:172] (0x99b3340) (0x99b3500) Stream removed, broadcasting: 1
I0821 06:54:37.075070      10 log.go:172] (0x99b3340) (0xaba0380) Stream removed, broadcasting: 3
I0821 06:54:37.075158      10 log.go:172] (0x99b3340) (0x99b3880) Stream removed, broadcasting: 5
Aug 21 06:54:37.075: INFO: Exec stderr: ""
Aug 21 06:54:37.075: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6933 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:54:37.075: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:54:37.180612      10 log.go:172] (0x8632000) (0x86320e0) Create stream
I0821 06:54:37.180848      10 log.go:172] (0x8632000) (0x86320e0) Stream added, broadcasting: 1
I0821 06:54:37.189450      10 log.go:172] (0x8632000) Reply frame received for 1
I0821 06:54:37.189620      10 log.go:172] (0x8632000) (0x86322a0) Create stream
I0821 06:54:37.189707      10 log.go:172] (0x8632000) (0x86322a0) Stream added, broadcasting: 3
I0821 06:54:37.191094      10 log.go:172] (0x8632000) Reply frame received for 3
I0821 06:54:37.191206      10 log.go:172] (0x8632000) (0x861c5b0) Create stream
I0821 06:54:37.191264      10 log.go:172] (0x8632000) (0x861c5b0) Stream added, broadcasting: 5
I0821 06:54:37.192459      10 log.go:172] (0x8632000) Reply frame received for 5
I0821 06:54:37.258664      10 log.go:172] (0x8632000) Data frame received for 3
I0821 06:54:37.258969      10 log.go:172] (0x86322a0) (3) Data frame handling
I0821 06:54:37.259192      10 log.go:172] (0x86322a0) (3) Data frame sent
I0821 06:54:37.259400      10 log.go:172] (0x8632000) Data frame received for 3
I0821 06:54:37.259602      10 log.go:172] (0x86322a0) (3) Data frame handling
I0821 06:54:37.259817      10 log.go:172] (0x8632000) Data frame received for 5
I0821 06:54:37.260000      10 log.go:172] (0x861c5b0) (5) Data frame handling
I0821 06:54:37.260182      10 log.go:172] (0x8632000) Data frame received for 1
I0821 06:54:37.260398      10 log.go:172] (0x86320e0) (1) Data frame handling
I0821 06:54:37.260596      10 log.go:172] (0x86320e0) (1) Data frame sent
I0821 06:54:37.260909      10 log.go:172] (0x8632000) (0x86320e0) Stream removed, broadcasting: 1
I0821 06:54:37.261113      10 log.go:172] (0x8632000) Go away received
I0821 06:54:37.261450      10 log.go:172] (0x8632000) (0x86320e0) Stream removed, broadcasting: 1
I0821 06:54:37.261612      10 log.go:172] (0x8632000) (0x86322a0) Stream removed, broadcasting: 3
I0821 06:54:37.261736      10 log.go:172] (0x8632000) (0x861c5b0) Stream removed, broadcasting: 5
Aug 21 06:54:37.261: INFO: Exec stderr: ""
Aug 21 06:54:37.262: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6933 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:54:37.262: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:54:37.373052      10 log.go:172] (0x8a680e0) (0x8a68230) Create stream
I0821 06:54:37.373260      10 log.go:172] (0x8a680e0) (0x8a68230) Stream added, broadcasting: 1
I0821 06:54:37.377000      10 log.go:172] (0x8a680e0) Reply frame received for 1
I0821 06:54:37.377203      10 log.go:172] (0x8a680e0) (0x8a685b0) Create stream
I0821 06:54:37.377311      10 log.go:172] (0x8a680e0) (0x8a685b0) Stream added, broadcasting: 3
I0821 06:54:37.379162      10 log.go:172] (0x8a680e0) Reply frame received for 3
I0821 06:54:37.379295      10 log.go:172] (0x8a680e0) (0x86325b0) Create stream
I0821 06:54:37.379359      10 log.go:172] (0x8a680e0) (0x86325b0) Stream added, broadcasting: 5
I0821 06:54:37.380576      10 log.go:172] (0x8a680e0) Reply frame received for 5
I0821 06:54:37.461582      10 log.go:172] (0x8a680e0) Data frame received for 3
I0821 06:54:37.461760      10 log.go:172] (0x8a685b0) (3) Data frame handling
I0821 06:54:37.461894      10 log.go:172] (0x8a680e0) Data frame received for 5
I0821 06:54:37.462023      10 log.go:172] (0x86325b0) (5) Data frame handling
I0821 06:54:37.462125      10 log.go:172] (0x8a685b0) (3) Data frame sent
I0821 06:54:37.462256      10 log.go:172] (0x8a680e0) Data frame received for 3
I0821 06:54:37.462342      10 log.go:172] (0x8a685b0) (3) Data frame handling
I0821 06:54:37.462643      10 log.go:172] (0x8a680e0) Data frame received for 1
I0821 06:54:37.462740      10 log.go:172] (0x8a68230) (1) Data frame handling
I0821 06:54:37.462832      10 log.go:172] (0x8a68230) (1) Data frame sent
I0821 06:54:37.462921      10 log.go:172] (0x8a680e0) (0x8a68230) Stream removed, broadcasting: 1
I0821 06:54:37.463019      10 log.go:172] (0x8a680e0) Go away received
I0821 06:54:37.463439      10 log.go:172] (0x8a680e0) (0x8a68230) Stream removed, broadcasting: 1
I0821 06:54:37.463602      10 log.go:172] (0x8a680e0) (0x8a685b0) Stream removed, broadcasting: 3
I0821 06:54:37.463712      10 log.go:172] (0x8a680e0) (0x86325b0) Stream removed, broadcasting: 5
Aug 21 06:54:37.463: INFO: Exec stderr: ""
Aug 21 06:54:37.463: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6933 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 06:54:37.464: INFO: >>> kubeConfig: /root/.kube/config
I0821 06:54:37.573335      10 log.go:172] (0xaba13b0) (0xaba1490) Create stream
I0821 06:54:37.573481      10 log.go:172] (0xaba13b0) (0xaba1490) Stream added, broadcasting: 1
I0821 06:54:37.578503      10 log.go:172] (0xaba13b0) Reply frame received for 1
I0821 06:54:37.578716      10 log.go:172] (0xaba13b0) (0xaf059d0) Create stream
I0821 06:54:37.578838      10 log.go:172] (0xaba13b0) (0xaf059d0) Stream added, broadcasting: 3
I0821 06:54:37.580640      10 log.go:172] (0xaba13b0) Reply frame received for 3
I0821 06:54:37.580879      10 log.go:172] (0xaba13b0) (0x8a68930) Create stream
I0821 06:54:37.580964      10 log.go:172] (0xaba13b0) (0x8a68930) Stream added, broadcasting: 5
I0821 06:54:37.582386      10 log.go:172] (0xaba13b0) Reply frame received for 5
I0821 06:54:37.644965      10 log.go:172] (0xaba13b0) Data frame received for 3
I0821 06:54:37.645142      10 log.go:172] (0xaf059d0) (3) Data frame handling
I0821 06:54:37.645309      10 log.go:172] (0xaba13b0) Data frame received for 5
I0821 06:54:37.645520      10 log.go:172] (0x8a68930) (5) Data frame handling
I0821 06:54:37.645672      10 log.go:172] (0xaf059d0) (3) Data frame sent
I0821 06:54:37.645791      10 log.go:172] (0xaba13b0) Data frame received for 3
I0821 06:54:37.645912      10 log.go:172] (0xaf059d0) (3) Data frame handling
I0821 06:54:37.646342      10 log.go:172] (0xaba13b0) Data frame received for 1
I0821 06:54:37.646472      10 log.go:172] (0xaba1490) (1) Data frame handling
I0821 06:54:37.646606      10 log.go:172] (0xaba1490) (1) Data frame sent
I0821 06:54:37.646775      10 log.go:172] (0xaba13b0) (0xaba1490) Stream removed, broadcasting: 1
I0821 06:54:37.646897      10 log.go:172] (0xaba13b0) Go away received
I0821 06:54:37.647172      10 log.go:172] (0xaba13b0) (0xaba1490) Stream removed, broadcasting: 1
I0821 06:54:37.647323      10 log.go:172] (0xaba13b0) (0xaf059d0) Stream removed, broadcasting: 3
I0821 06:54:37.647408      10 log.go:172] (0xaba13b0) (0x8a68930) Stream removed, broadcasting: 5
Aug 21 06:54:37.647: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:54:37.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6933" for this suite.

• [SLOW TEST:12.145 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":3039,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:54:37.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Aug 21 06:54:37.765: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix559579110/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:54:38.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2120" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":177,"skipped":3045,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:54:38.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:54:38.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1515" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3107,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:54:38.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 21 06:54:46.597: INFO: 0 pods remaining
Aug 21 06:54:46.598: INFO: 0 pods have nil DeletionTimestamp
Aug 21 06:54:46.598: INFO: 
Aug 21 06:54:47.763: INFO: 0 pods remaining
Aug 21 06:54:47.764: INFO: 0 pods have nil DeletionTimestamp
Aug 21 06:54:47.764: INFO: 
STEP: Gathering metrics
W0821 06:54:49.017900      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 06:54:49.018: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:54:49.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8917" for this suite.

• [SLOW TEST:10.756 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":179,"skipped":3155,"failed":0}
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:54:49.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 21 06:54:49.960: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 06:54:50.094: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 06:54:50.098: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 21 06:54:50.113: INFO: test-pod from e2e-kubelet-etc-hosts-6933 started at 2020-08-21 06:54:25 +0000 UTC (3 container statuses recorded)
Aug 21 06:54:50.113: INFO: 	Container busybox-1 ready: true, restart count 0
Aug 21 06:54:50.113: INFO: 	Container busybox-2 ready: true, restart count 0
Aug 21 06:54:50.113: INFO: 	Container busybox-3 ready: true, restart count 0
Aug 21 06:54:50.113: INFO: kindnet-kkxd5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 06:54:50.113: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 06:54:50.113: INFO: kube-proxy-vn4t5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 06:54:50.113: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 06:54:50.113: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 21 06:54:50.126: INFO: kube-proxy-c52ll from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 06:54:50.126: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 06:54:50.126: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-6933 started at 2020-08-21 06:54:31 +0000 UTC (2 container statuses recorded)
Aug 21 06:54:50.126: INFO: 	Container busybox-1 ready: true, restart count 0
Aug 21 06:54:50.126: INFO: 	Container busybox-2 ready: true, restart count 0
Aug 21 06:54:50.126: INFO: kindnet-qzfqb from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 06:54:50.126: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162d36113a62667c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162d36113fad601d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:54:51.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4551" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":275,"completed":180,"skipped":3155,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:54:51.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-8cd1d0fc-490e-4f5e-a5f3-8823bbe09423
STEP: Creating a pod to test consume secrets
Aug 21 06:54:51.468: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9e13288f-fa41-4399-9f72-81bcb4bac3f2" in namespace "projected-4672" to be "Succeeded or Failed"
Aug 21 06:54:51.478: INFO: Pod "pod-projected-secrets-9e13288f-fa41-4399-9f72-81bcb4bac3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.12404ms
Aug 21 06:54:53.486: INFO: Pod "pod-projected-secrets-9e13288f-fa41-4399-9f72-81bcb4bac3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01712155s
Aug 21 06:54:55.497: INFO: Pod "pod-projected-secrets-9e13288f-fa41-4399-9f72-81bcb4bac3f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027798273s
STEP: Saw pod success
Aug 21 06:54:55.497: INFO: Pod "pod-projected-secrets-9e13288f-fa41-4399-9f72-81bcb4bac3f2" satisfied condition "Succeeded or Failed"
Aug 21 06:54:55.501: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-9e13288f-fa41-4399-9f72-81bcb4bac3f2 container projected-secret-volume-test: 
STEP: delete the pod
Aug 21 06:54:55.553: INFO: Waiting for pod pod-projected-secrets-9e13288f-fa41-4399-9f72-81bcb4bac3f2 to disappear
Aug 21 06:54:55.561: INFO: Pod pod-projected-secrets-9e13288f-fa41-4399-9f72-81bcb4bac3f2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:54:55.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4672" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3156,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:54:55.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-7030
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-7030
STEP: creating replication controller externalsvc in namespace services-7030
I0821 06:54:55.754772      10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7030, replica count: 2
I0821 06:54:58.806223      10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 06:55:01.807062      10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 21 06:55:01.898: INFO: Creating new exec pod
Aug 21 06:55:05.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-7030 execpodjvfx5 -- /bin/sh -x -c nslookup nodeport-service'
Aug 21 06:55:10.068: INFO: stderr: "I0821 06:55:09.933041    3254 log.go:172] (0x29c2af0) (0x29c2c40) Create stream\nI0821 06:55:09.934725    3254 log.go:172] (0x29c2af0) (0x29c2c40) Stream added, broadcasting: 1\nI0821 06:55:09.946582    3254 log.go:172] (0x29c2af0) Reply frame received for 1\nI0821 06:55:09.947046    3254 log.go:172] (0x29c2af0) (0x2be80e0) Create stream\nI0821 06:55:09.947110    3254 log.go:172] (0x29c2af0) (0x2be80e0) Stream added, broadcasting: 3\nI0821 06:55:09.948669    3254 log.go:172] (0x29c2af0) Reply frame received for 3\nI0821 06:55:09.949346    3254 log.go:172] (0x29c2af0) (0x29c36c0) Create stream\nI0821 06:55:09.949478    3254 log.go:172] (0x29c2af0) (0x29c36c0) Stream added, broadcasting: 5\nI0821 06:55:09.951175    3254 log.go:172] (0x29c2af0) Reply frame received for 5\nI0821 06:55:10.029067    3254 log.go:172] (0x29c2af0) Data frame received for 5\nI0821 06:55:10.029273    3254 log.go:172] (0x29c36c0) (5) Data frame handling\nI0821 06:55:10.029594    3254 log.go:172] (0x29c36c0) (5) Data frame sent\n+ nslookup nodeport-service\nI0821 06:55:10.038218    3254 log.go:172] (0x29c2af0) Data frame received for 3\nI0821 06:55:10.038430    3254 log.go:172] (0x2be80e0) (3) Data frame handling\nI0821 06:55:10.038631    3254 log.go:172] (0x2be80e0) (3) Data frame sent\nI0821 06:55:10.039294    3254 log.go:172] (0x29c2af0) Data frame received for 3\nI0821 06:55:10.039405    3254 log.go:172] (0x2be80e0) (3) Data frame handling\nI0821 06:55:10.039552    3254 log.go:172] (0x2be80e0) (3) Data frame sent\nI0821 06:55:10.039931    3254 log.go:172] (0x29c2af0) Data frame received for 5\nI0821 06:55:10.040107    3254 log.go:172] (0x29c2af0) Data frame received for 3\nI0821 06:55:10.040248    3254 log.go:172] (0x2be80e0) (3) Data frame handling\nI0821 06:55:10.040358    3254 log.go:172] (0x29c36c0) (5) Data frame handling\nI0821 06:55:10.042138    3254 log.go:172] (0x29c2af0) Data frame received for 1\nI0821 06:55:10.042338    3254 log.go:172] (0x29c2c40) (1) Data frame handling\nI0821 06:55:10.042562    3254 log.go:172] (0x29c2c40) (1) Data frame sent\nI0821 06:55:10.044585    3254 log.go:172] (0x29c2af0) (0x29c2c40) Stream removed, broadcasting: 1\nI0821 06:55:10.045074    3254 log.go:172] (0x29c2af0) Go away received\nI0821 06:55:10.057824    3254 log.go:172] (0x29c2af0) (0x29c2c40) Stream removed, broadcasting: 1\nI0821 06:55:10.058104    3254 log.go:172] (0x29c2af0) (0x2be80e0) Stream removed, broadcasting: 3\nI0821 06:55:10.058324    3254 log.go:172] (0x29c2af0) (0x29c36c0) Stream removed, broadcasting: 5\n"
Aug 21 06:55:10.070: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7030.svc.cluster.local\tcanonical name = externalsvc.services-7030.svc.cluster.local.\nName:\texternalsvc.services-7030.svc.cluster.local\nAddress: 10.107.214.69\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7030, will wait for the garbage collector to delete the pods
Aug 21 06:55:10.134: INFO: Deleting ReplicationController externalsvc took: 8.280721ms
Aug 21 06:55:10.535: INFO: Terminating ReplicationController externalsvc pods took: 400.973193ms
Aug 21 06:55:19.196: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:55:19.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7030" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:23.697 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":182,"skipped":3180,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:55:19.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 21 06:55:19.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2606'
Aug 21 06:55:20.548: INFO: stderr: ""
Aug 21 06:55:20.548: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 21 06:55:25.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2606 -o json'
Aug 21 06:55:26.732: INFO: stderr: ""
Aug 21 06:55:26.732: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-21T06:55:20Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-08-21T06:55:20Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                            \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.2.234\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                       
     }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-08-21T06:55:24Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-2606\",\n        \"resourceVersion\": \"2029525\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-2606/pods/e2e-test-httpd-pod\",\n        \"uid\": \"642392d1-e12a-4a2a-8372-7820a227bfe6\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-skqfw\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-skqfw\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-skqfw\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-21T06:55:20Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-21T06:55:24Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-21T06:55:24Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-21T06:55:20Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"containerd://444e3d5ae1fed62838412a9cd438ecd0aa65b97b722f4cfb13577eed627dbd7c\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-21T06:55:24Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.16\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.234\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.234\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-21T06:55:20Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 21 06:55:26.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2606'
Aug 21 06:55:28.295: INFO: stderr: ""
Aug 21 06:55:28.295: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Aug 21 06:55:28.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2606'
Aug 21 06:55:31.213: INFO: stderr: ""
Aug 21 06:55:31.213: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:55:31.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2606" for this suite.

• [SLOW TEST:11.965 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":183,"skipped":3191,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:55:31.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 06:55:31.377: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9e22826-b092-4b1b-9019-17306d197922" in namespace "projected-9221" to be "Succeeded or Failed"
Aug 21 06:55:31.384: INFO: Pod "downwardapi-volume-c9e22826-b092-4b1b-9019-17306d197922": Phase="Pending", Reason="", readiness=false. Elapsed: 7.477836ms
Aug 21 06:55:33.393: INFO: Pod "downwardapi-volume-c9e22826-b092-4b1b-9019-17306d197922": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015741889s
Aug 21 06:55:35.400: INFO: Pod "downwardapi-volume-c9e22826-b092-4b1b-9019-17306d197922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023490348s
STEP: Saw pod success
Aug 21 06:55:35.401: INFO: Pod "downwardapi-volume-c9e22826-b092-4b1b-9019-17306d197922" satisfied condition "Succeeded or Failed"
Aug 21 06:55:35.406: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-c9e22826-b092-4b1b-9019-17306d197922 container client-container: 
STEP: delete the pod
Aug 21 06:55:35.428: INFO: Waiting for pod downwardapi-volume-c9e22826-b092-4b1b-9019-17306d197922 to disappear
Aug 21 06:55:35.440: INFO: Pod downwardapi-volume-c9e22826-b092-4b1b-9019-17306d197922 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:55:35.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9221" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3207,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:55:35.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-6149b55c-50a8-4786-8c7f-eb4c586b4d02
STEP: Creating secret with name secret-projected-all-test-volume-ca3f0cbf-997d-4caf-a874-9c1997dc98e7
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 21 06:55:35.553: INFO: Waiting up to 5m0s for pod "projected-volume-750caff5-58f1-45af-b441-2e2d06fc14d3" in namespace "projected-7403" to be "Succeeded or Failed"
Aug 21 06:55:35.573: INFO: Pod "projected-volume-750caff5-58f1-45af-b441-2e2d06fc14d3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.086755ms
Aug 21 06:55:37.580: INFO: Pod "projected-volume-750caff5-58f1-45af-b441-2e2d06fc14d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025992955s
Aug 21 06:55:39.588: INFO: Pod "projected-volume-750caff5-58f1-45af-b441-2e2d06fc14d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03420084s
STEP: Saw pod success
Aug 21 06:55:39.588: INFO: Pod "projected-volume-750caff5-58f1-45af-b441-2e2d06fc14d3" satisfied condition "Succeeded or Failed"
Aug 21 06:55:39.594: INFO: Trying to get logs from node kali-worker2 pod projected-volume-750caff5-58f1-45af-b441-2e2d06fc14d3 container projected-all-volume-test: 
STEP: delete the pod
Aug 21 06:55:39.634: INFO: Waiting for pod projected-volume-750caff5-58f1-45af-b441-2e2d06fc14d3 to disappear
Aug 21 06:55:39.646: INFO: Pod projected-volume-750caff5-58f1-45af-b441-2e2d06fc14d3 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:55:39.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7403" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3214,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:55:39.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 21 06:55:44.332: INFO: Successfully updated pod "pod-update-activedeadlineseconds-55950417-e27f-40aa-8222-aaf8dae41b95"
Aug 21 06:55:44.332: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-55950417-e27f-40aa-8222-aaf8dae41b95" in namespace "pods-454" to be "terminated due to deadline exceeded"
Aug 21 06:55:44.337: INFO: Pod "pod-update-activedeadlineseconds-55950417-e27f-40aa-8222-aaf8dae41b95": Phase="Running", Reason="", readiness=true. Elapsed: 4.382689ms
Aug 21 06:55:46.493: INFO: Pod "pod-update-activedeadlineseconds-55950417-e27f-40aa-8222-aaf8dae41b95": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.160623414s
Aug 21 06:55:46.494: INFO: Pod "pod-update-activedeadlineseconds-55950417-e27f-40aa-8222-aaf8dae41b95" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:55:46.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-454" for this suite.

• [SLOW TEST:6.872 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3236,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:55:46.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 06:55:55.525: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 06:55:57.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733589755, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733589755, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733589755, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733589755, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 06:56:00.674: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:56:00.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9720" for this suite.
STEP: Destroying namespace "webhook-9720-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.267 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":187,"skipped":3263,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:56:00.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 06:56:00.940: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:00.953: INFO: Number of nodes with available pods: 0
Aug 21 06:56:00.953: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:01.963: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:01.969: INFO: Number of nodes with available pods: 0
Aug 21 06:56:01.969: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:03.061: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:03.067: INFO: Number of nodes with available pods: 0
Aug 21 06:56:03.067: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:03.963: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:03.970: INFO: Number of nodes with available pods: 0
Aug 21 06:56:03.970: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:04.965: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:04.973: INFO: Number of nodes with available pods: 1
Aug 21 06:56:04.973: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:06.000: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:06.023: INFO: Number of nodes with available pods: 2
Aug 21 06:56:06.023: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 21 06:56:06.082: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:06.088: INFO: Number of nodes with available pods: 1
Aug 21 06:56:06.088: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:07.101: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:07.107: INFO: Number of nodes with available pods: 1
Aug 21 06:56:07.107: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:08.099: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:08.106: INFO: Number of nodes with available pods: 1
Aug 21 06:56:08.106: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:09.100: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:09.109: INFO: Number of nodes with available pods: 1
Aug 21 06:56:09.109: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:10.100: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:10.107: INFO: Number of nodes with available pods: 1
Aug 21 06:56:10.107: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:11.100: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:11.107: INFO: Number of nodes with available pods: 1
Aug 21 06:56:11.107: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:12.099: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:12.105: INFO: Number of nodes with available pods: 1
Aug 21 06:56:12.105: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:13.098: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:13.105: INFO: Number of nodes with available pods: 1
Aug 21 06:56:13.105: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:14.098: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:14.106: INFO: Number of nodes with available pods: 1
Aug 21 06:56:14.106: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:15.101: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:15.107: INFO: Number of nodes with available pods: 1
Aug 21 06:56:15.107: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:16.100: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:16.106: INFO: Number of nodes with available pods: 1
Aug 21 06:56:16.106: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:17.101: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:17.108: INFO: Number of nodes with available pods: 1
Aug 21 06:56:17.108: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:18.098: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:18.106: INFO: Number of nodes with available pods: 1
Aug 21 06:56:18.106: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:19.100: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:19.106: INFO: Number of nodes with available pods: 1
Aug 21 06:56:19.107: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:20.099: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:20.105: INFO: Number of nodes with available pods: 1
Aug 21 06:56:20.105: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:21.101: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:21.107: INFO: Number of nodes with available pods: 1
Aug 21 06:56:21.107: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:22.097: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:22.104: INFO: Number of nodes with available pods: 1
Aug 21 06:56:22.105: INFO: Node kali-worker is running more than one daemon pod
Aug 21 06:56:23.099: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 06:56:23.106: INFO: Number of nodes with available pods: 2
Aug 21 06:56:23.106: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-798, will wait for the garbage collector to delete the pods
Aug 21 06:56:23.177: INFO: Deleting DaemonSet.extensions daemon-set took: 9.404242ms
Aug 21 06:56:23.578: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.981744ms
Aug 21 06:56:29.200: INFO: Number of nodes with available pods: 0
Aug 21 06:56:29.200: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 06:56:29.205: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-798/daemonsets","resourceVersion":"2029962"},"items":null}

Aug 21 06:56:29.210: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-798/pods","resourceVersion":"2029962"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:56:29.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-798" for this suite.

• [SLOW TEST:28.432 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":188,"skipped":3288,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:56:29.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-1795
STEP: creating replication controller nodeport-test in namespace services-1795
I0821 06:56:29.398772      10 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-1795, replica count: 2
I0821 06:56:32.450194      10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 06:56:35.451097      10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 06:56:35.451: INFO: Creating new exec pod
Aug 21 06:56:40.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-1795 execpodb5jkx -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug 21 06:56:41.863: INFO: stderr: "I0821 06:56:41.759104    3383 log.go:172] (0x28b2150) (0x28b2230) Create stream\nI0821 06:56:41.763259    3383 log.go:172] (0x28b2150) (0x28b2230) Stream added, broadcasting: 1\nI0821 06:56:41.776503    3383 log.go:172] (0x28b2150) Reply frame received for 1\nI0821 06:56:41.777048    3383 log.go:172] (0x28b2150) (0x2f9a070) Create stream\nI0821 06:56:41.777127    3383 log.go:172] (0x28b2150) (0x2f9a070) Stream added, broadcasting: 3\nI0821 06:56:41.778383    3383 log.go:172] (0x28b2150) Reply frame received for 3\nI0821 06:56:41.778620    3383 log.go:172] (0x28b2150) (0x2a22850) Create stream\nI0821 06:56:41.778697    3383 log.go:172] (0x28b2150) (0x2a22850) Stream added, broadcasting: 5\nI0821 06:56:41.779659    3383 log.go:172] (0x28b2150) Reply frame received for 5\nI0821 06:56:41.841964    3383 log.go:172] (0x28b2150) Data frame received for 3\nI0821 06:56:41.842194    3383 log.go:172] (0x2f9a070) (3) Data frame handling\nI0821 06:56:41.842419    3383 log.go:172] (0x28b2150) Data frame received for 5\nI0821 06:56:41.842538    3383 log.go:172] (0x2a22850) (5) Data frame handling\nI0821 06:56:41.843029    3383 log.go:172] (0x28b2150) Data frame received for 1\nI0821 06:56:41.843167    3383 log.go:172] (0x28b2230) (1) Data frame handling\nI0821 06:56:41.843651    3383 log.go:172] (0x2a22850) (5) Data frame sent\nI0821 06:56:41.843833    3383 log.go:172] (0x28b2150) Data frame received for 5\n+ nc -zv -t -w 2 nodeport-test 80\nI0821 06:56:41.843954    3383 log.go:172] (0x2a22850) (5) Data frame handling\nI0821 06:56:41.844138    3383 log.go:172] (0x28b2230) (1) Data frame sent\nI0821 06:56:41.845459    3383 log.go:172] (0x28b2150) (0x28b2230) Stream removed, broadcasting: 1\nI0821 06:56:41.845715    3383 log.go:172] (0x2a22850) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0821 06:56:41.845825    3383 log.go:172] (0x28b2150) Data frame received for 5\nI0821 06:56:41.846806    3383 log.go:172] (0x2a22850) (5) Data frame handling\nI0821 06:56:41.848564    3383 log.go:172] (0x28b2150) Go away received\nI0821 06:56:41.851005    3383 log.go:172] (0x28b2150) (0x28b2230) Stream removed, broadcasting: 1\nI0821 06:56:41.851353    3383 log.go:172] (0x28b2150) (0x2f9a070) Stream removed, broadcasting: 3\nI0821 06:56:41.851560    3383 log.go:172] (0x28b2150) (0x2a22850) Stream removed, broadcasting: 5\n"
Aug 21 06:56:41.864: INFO: stdout: ""
Aug 21 06:56:41.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-1795 execpodb5jkx -- /bin/sh -x -c nc -zv -t -w 2 10.104.249.164 80'
Aug 21 06:56:43.258: INFO: stderr: "I0821 06:56:43.142619    3405 log.go:172] (0x28a0a10) (0x28a1490) Create stream\nI0821 06:56:43.144657    3405 log.go:172] (0x28a0a10) (0x28a1490) Stream added, broadcasting: 1\nI0821 06:56:43.159381    3405 log.go:172] (0x28a0a10) Reply frame received for 1\nI0821 06:56:43.159962    3405 log.go:172] (0x28a0a10) (0x3116070) Create stream\nI0821 06:56:43.160039    3405 log.go:172] (0x28a0a10) (0x3116070) Stream added, broadcasting: 3\nI0821 06:56:43.161555    3405 log.go:172] (0x28a0a10) Reply frame received for 3\nI0821 06:56:43.161787    3405 log.go:172] (0x28a0a10) (0x31162a0) Create stream\nI0821 06:56:43.161853    3405 log.go:172] (0x28a0a10) (0x31162a0) Stream added, broadcasting: 5\nI0821 06:56:43.162991    3405 log.go:172] (0x28a0a10) Reply frame received for 5\nI0821 06:56:43.236501    3405 log.go:172] (0x28a0a10) Data frame received for 3\nI0821 06:56:43.236898    3405 log.go:172] (0x28a0a10) Data frame received for 1\nI0821 06:56:43.237108    3405 log.go:172] (0x28a1490) (1) Data frame handling\nI0821 06:56:43.237309    3405 log.go:172] (0x28a0a10) Data frame received for 5\nI0821 06:56:43.237574    3405 log.go:172] (0x31162a0) (5) Data frame handling\nI0821 06:56:43.237845    3405 log.go:172] (0x3116070) (3) Data frame handling\nI0821 06:56:43.238020    3405 log.go:172] (0x28a1490) (1) Data frame sent\nI0821 06:56:43.238539    3405 log.go:172] (0x31162a0) (5) Data frame sent\nI0821 06:56:43.238721    3405 log.go:172] (0x28a0a10) Data frame received for 5\nI0821 06:56:43.238834    3405 log.go:172] (0x31162a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.249.164 80\nConnection to 10.104.249.164 80 port [tcp/http] succeeded!\nI0821 06:56:43.240837    3405 log.go:172] (0x28a0a10) (0x28a1490) Stream removed, broadcasting: 1\nI0821 06:56:43.242905    3405 log.go:172] (0x28a0a10) Go away received\nI0821 06:56:43.245975    3405 log.go:172] (0x28a0a10) (0x28a1490) Stream removed, broadcasting: 1\nI0821 06:56:43.246179    3405 log.go:172] (0x28a0a10) (0x3116070) Stream removed, broadcasting: 3\nI0821 06:56:43.246348    3405 log.go:172] (0x28a0a10) (0x31162a0) Stream removed, broadcasting: 5\n"
Aug 21 06:56:43.259: INFO: stdout: ""
Aug 21 06:56:43.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-1795 execpodb5jkx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 32436'
Aug 21 06:56:44.650: INFO: stderr: "I0821 06:56:44.520153    3427 log.go:172] (0x28a6620) (0x28a6770) Create stream\nI0821 06:56:44.521963    3427 log.go:172] (0x28a6620) (0x28a6770) Stream added, broadcasting: 1\nI0821 06:56:44.534743    3427 log.go:172] (0x28a6620) Reply frame received for 1\nI0821 06:56:44.535336    3427 log.go:172] (0x28a6620) (0x30bc0e0) Create stream\nI0821 06:56:44.535407    3427 log.go:172] (0x28a6620) (0x30bc0e0) Stream added, broadcasting: 3\nI0821 06:56:44.536872    3427 log.go:172] (0x28a6620) Reply frame received for 3\nI0821 06:56:44.537132    3427 log.go:172] (0x28a6620) (0x30bc310) Create stream\nI0821 06:56:44.537199    3427 log.go:172] (0x28a6620) (0x30bc310) Stream added, broadcasting: 5\nI0821 06:56:44.538478    3427 log.go:172] (0x28a6620) Reply frame received for 5\nI0821 06:56:44.632254    3427 log.go:172] (0x28a6620) Data frame received for 3\nI0821 06:56:44.632560    3427 log.go:172] (0x28a6620) Data frame received for 1\nI0821 06:56:44.633025    3427 log.go:172] (0x28a6620) Data frame received for 5\nI0821 06:56:44.633194    3427 log.go:172] (0x30bc0e0) (3) Data frame handling\nI0821 06:56:44.633369    3427 log.go:172] (0x30bc310) (5) Data frame handling\nI0821 06:56:44.633558    3427 log.go:172] (0x28a6770) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 32436\nConnection to 172.18.0.16 32436 port [tcp/32436] succeeded!\nI0821 06:56:44.635692    3427 log.go:172] (0x28a6770) (1) Data frame sent\nI0821 06:56:44.635972    3427 log.go:172] (0x30bc310) (5) Data frame sent\nI0821 06:56:44.636546    3427 log.go:172] (0x28a6620) Data frame received for 5\nI0821 06:56:44.636668    3427 log.go:172] (0x30bc310) (5) Data frame handling\nI0821 06:56:44.638615    3427 log.go:172] (0x28a6620) (0x28a6770) Stream removed, broadcasting: 1\nI0821 06:56:44.639254    3427 log.go:172] (0x28a6620) Go away received\nI0821 06:56:44.641668    3427 log.go:172] (0x28a6620) (0x28a6770) Stream removed, broadcasting: 1\nI0821 06:56:44.641862    3427 log.go:172] (0x28a6620) (0x30bc0e0) Stream removed, broadcasting: 3\nI0821 06:56:44.642012    3427 log.go:172] (0x28a6620) (0x30bc310) Stream removed, broadcasting: 5\n"
Aug 21 06:56:44.652: INFO: stdout: ""
Aug 21 06:56:44.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-1795 execpodb5jkx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32436'
Aug 21 06:56:46.021: INFO: stderr: "I0821 06:56:45.908262    3452 log.go:172] (0x2cfa070) (0x2cfa0e0) Create stream\nI0821 06:56:45.910035    3452 log.go:172] (0x2cfa070) (0x2cfa0e0) Stream added, broadcasting: 1\nI0821 06:56:45.923190    3452 log.go:172] (0x2cfa070) Reply frame received for 1\nI0821 06:56:45.923747    3452 log.go:172] (0x2cfa070) (0x29be620) Create stream\nI0821 06:56:45.923841    3452 log.go:172] (0x2cfa070) (0x29be620) Stream added, broadcasting: 3\nI0821 06:56:45.925137    3452 log.go:172] (0x2cfa070) Reply frame received for 3\nI0821 06:56:45.925480    3452 log.go:172] (0x2cfa070) (0x2c16af0) Create stream\nI0821 06:56:45.925604    3452 log.go:172] (0x2cfa070) (0x2c16af0) Stream added, broadcasting: 5\nI0821 06:56:45.926839    3452 log.go:172] (0x2cfa070) Reply frame received for 5\nI0821 06:56:45.999711    3452 log.go:172] (0x2cfa070) Data frame received for 3\nI0821 06:56:46.000110    3452 log.go:172] (0x29be620) (3) Data frame handling\nI0821 06:56:46.000460    3452 log.go:172] (0x2cfa070) Data frame received for 5\nI0821 06:56:46.000591    3452 log.go:172] (0x2c16af0) (5) Data frame handling\nI0821 06:56:46.000909    3452 log.go:172] (0x2cfa070) Data frame received for 1\nI0821 06:56:46.001120    3452 log.go:172] (0x2cfa0e0) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 32436\nConnection to 172.18.0.13 32436 port [tcp/32436] succeeded!\nI0821 06:56:46.002443    3452 log.go:172] (0x2cfa0e0) (1) Data frame sent\nI0821 06:56:46.003271    3452 log.go:172] (0x2c16af0) (5) Data frame sent\nI0821 06:56:46.003370    3452 log.go:172] (0x2cfa070) Data frame received for 5\nI0821 06:56:46.003599    3452 log.go:172] (0x2cfa070) (0x2cfa0e0) Stream removed, broadcasting: 1\nI0821 06:56:46.004883    3452 log.go:172] (0x2c16af0) (5) Data frame handling\nI0821 06:56:46.005885    3452 log.go:172] (0x2cfa070) Go away received\nI0821 06:56:46.009008    3452 log.go:172] (0x2cfa070) (0x2cfa0e0) Stream removed, broadcasting: 1\nI0821 06:56:46.009226    3452 log.go:172] (0x2cfa070) (0x29be620) Stream removed, broadcasting: 3\nI0821 06:56:46.009443    3452 log.go:172] (0x2cfa070) (0x2c16af0) Stream removed, broadcasting: 5\n"
Aug 21 06:56:46.022: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:56:46.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1795" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:16.790 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":189,"skipped":3297,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:56:46.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 21 06:56:46.139: INFO: >>> kubeConfig: /root/.kube/config
Aug 21 06:57:05.055: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:58:11.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1452" for this suite.

• [SLOW TEST:85.159 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":190,"skipped":3304,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:58:11.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Aug 21 06:58:11.331: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Aug 21 06:58:11.363: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Aug 21 06:58:11.364: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Aug 21 06:58:11.385: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Aug 21 06:58:11.386: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Aug 21 06:58:11.406: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Aug 21 06:58:11.406: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Aug 21 06:58:18.950: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:58:18.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-6952" for this suite.

• [SLOW TEST:7.824 seconds]
[sig-scheduling] LimitRange
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":191,"skipped":3312,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:58:19.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:58:23.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8624" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3323,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:58:23.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:58:30.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1930" for this suite.

• [SLOW TEST:7.101 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":193,"skipped":3372,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:58:30.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0821 06:58:40.495341      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 06:58:40.495: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:58:40.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4733" for this suite.

• [SLOW TEST:10.173 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":194,"skipped":3383,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:58:40.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:58:40.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-676" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":195,"skipped":3391,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:58:40.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 06:58:40.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5724b598-8ea4-4041-b2f2-5506942cd6a3" in namespace "downward-api-6923" to be "Succeeded or Failed"
Aug 21 06:58:40.814: INFO: Pod "downwardapi-volume-5724b598-8ea4-4041-b2f2-5506942cd6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.444977ms
Aug 21 06:58:42.821: INFO: Pod "downwardapi-volume-5724b598-8ea4-4041-b2f2-5506942cd6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015699551s
Aug 21 06:58:44.829: INFO: Pod "downwardapi-volume-5724b598-8ea4-4041-b2f2-5506942cd6a3": Phase="Running", Reason="", readiness=true. Elapsed: 4.023142294s
Aug 21 06:58:46.835: INFO: Pod "downwardapi-volume-5724b598-8ea4-4041-b2f2-5506942cd6a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029506993s
STEP: Saw pod success
Aug 21 06:58:46.835: INFO: Pod "downwardapi-volume-5724b598-8ea4-4041-b2f2-5506942cd6a3" satisfied condition "Succeeded or Failed"
Aug 21 06:58:46.840: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-5724b598-8ea4-4041-b2f2-5506942cd6a3 container client-container: 
STEP: delete the pod
Aug 21 06:58:46.874: INFO: Waiting for pod downwardapi-volume-5724b598-8ea4-4041-b2f2-5506942cd6a3 to disappear
Aug 21 06:58:46.908: INFO: Pod downwardapi-volume-5724b598-8ea4-4041-b2f2-5506942cd6a3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:58:46.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6923" for this suite.

• [SLOW TEST:6.230 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3392,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:58:46.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 21 06:58:51.102: INFO: &Pod{ObjectMeta:{send-events-14546a72-0a87-4226-aa5c-1cf32a929398  events-9391 /api/v1/namespaces/events-9391/pods/send-events-14546a72-0a87-4226-aa5c-1cf32a929398 3f455b62-7cc1-4618-92a3-4781aeabbf32 2030700 0 2020-08-21 06:58:47 +0000 UTC   map[name:foo time:37797403] map[] [] []  [{e2e.test Update v1 2020-08-21 06:58:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 06:58:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 
123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 52 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnp25,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnp25,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnp25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:58:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:58:50 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:58:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 06:58:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.242,StartTime:2020-08-21 06:58:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 06:58:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://f86ef19e23da6cc47842e9bc5717a118b28bd720f20fbb1683f071864c4e6fa3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Aug 21 06:58:53.117: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 21 06:58:55.126: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:58:55.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9391" for this suite.

• [SLOW TEST:8.277 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":197,"skipped":3395,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:58:55.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 06:58:55.284: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22314e92-24ef-49de-bf18-d53d1f0e7405" in namespace "projected-847" to be "Succeeded or Failed"
Aug 21 06:58:55.298: INFO: Pod "downwardapi-volume-22314e92-24ef-49de-bf18-d53d1f0e7405": Phase="Pending", Reason="", readiness=false. Elapsed: 14.245552ms
Aug 21 06:58:57.306: INFO: Pod "downwardapi-volume-22314e92-24ef-49de-bf18-d53d1f0e7405": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021599769s
Aug 21 06:58:59.311: INFO: Pod "downwardapi-volume-22314e92-24ef-49de-bf18-d53d1f0e7405": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027452744s
STEP: Saw pod success
Aug 21 06:58:59.312: INFO: Pod "downwardapi-volume-22314e92-24ef-49de-bf18-d53d1f0e7405" satisfied condition "Succeeded or Failed"
Aug 21 06:58:59.316: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-22314e92-24ef-49de-bf18-d53d1f0e7405 container client-container: 
STEP: delete the pod
Aug 21 06:58:59.353: INFO: Waiting for pod downwardapi-volume-22314e92-24ef-49de-bf18-d53d1f0e7405 to disappear
Aug 21 06:58:59.364: INFO: Pod downwardapi-volume-22314e92-24ef-49de-bf18-d53d1f0e7405 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:58:59.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-847" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3407,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:58:59.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 06:58:59.494: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7dd702f3-7dd4-4390-a08b-048ad0ea4368" in namespace "projected-6022" to be "Succeeded or Failed"
Aug 21 06:58:59.503: INFO: Pod "downwardapi-volume-7dd702f3-7dd4-4390-a08b-048ad0ea4368": Phase="Pending", Reason="", readiness=false. Elapsed: 8.356609ms
Aug 21 06:59:01.509: INFO: Pod "downwardapi-volume-7dd702f3-7dd4-4390-a08b-048ad0ea4368": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015094523s
Aug 21 06:59:03.517: INFO: Pod "downwardapi-volume-7dd702f3-7dd4-4390-a08b-048ad0ea4368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023046925s
STEP: Saw pod success
Aug 21 06:59:03.518: INFO: Pod "downwardapi-volume-7dd702f3-7dd4-4390-a08b-048ad0ea4368" satisfied condition "Succeeded or Failed"
Aug 21 06:59:03.523: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-7dd702f3-7dd4-4390-a08b-048ad0ea4368 container client-container: 
STEP: delete the pod
Aug 21 06:59:03.548: INFO: Waiting for pod downwardapi-volume-7dd702f3-7dd4-4390-a08b-048ad0ea4368 to disappear
Aug 21 06:59:03.559: INFO: Pod downwardapi-volume-7dd702f3-7dd4-4390-a08b-048ad0ea4368 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:59:03.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6022" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3459,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:59:03.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
Aug 21 06:59:03.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-5215 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 21 06:59:04.874: INFO: stderr: ""
Aug 21 06:59:04.875: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Aug 21 06:59:04.875: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 21 06:59:04.875: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5215" to be "running and ready, or succeeded"
Aug 21 06:59:04.882: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.772525ms
Aug 21 06:59:06.889: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014053322s
Aug 21 06:59:08.901: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.025814515s
Aug 21 06:59:08.901: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 21 06:59:08.902: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug 21 06:59:08.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5215'
Aug 21 06:59:10.063: INFO: stderr: ""
Aug 21 06:59:10.063: INFO: stdout: "I0821 06:59:07.226556       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/wlz 500\nI0821 06:59:07.426713       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/khv2 383\nI0821 06:59:07.626748       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/h24 547\nI0821 06:59:07.826793       1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/gwp 553\nI0821 06:59:08.026728       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/llw 542\nI0821 06:59:08.226834       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/vhv7 493\nI0821 06:59:08.426703       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/cb2j 201\nI0821 06:59:08.626773       1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/ttp 386\nI0821 06:59:08.826751       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/rx7 535\nI0821 06:59:09.026733       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/jgw 597\nI0821 06:59:09.226714       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/qbk 441\nI0821 06:59:09.426694       1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/s49 285\nI0821 06:59:09.626775       1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/x6w 471\nI0821 06:59:09.826707       1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/6pv 592\nI0821 06:59:10.026793       1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/hwn 202\n"
STEP: limiting log lines
Aug 21 06:59:10.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5215 --tail=1'
Aug 21 06:59:11.206: INFO: stderr: ""
Aug 21 06:59:11.207: INFO: stdout: "I0821 06:59:11.026736       1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/wrfh 353\n"
Aug 21 06:59:11.207: INFO: got output "I0821 06:59:11.026736       1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/wrfh 353\n"
STEP: limiting log bytes
Aug 21 06:59:11.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5215 --limit-bytes=1'
Aug 21 06:59:12.378: INFO: stderr: ""
Aug 21 06:59:12.379: INFO: stdout: "I"
Aug 21 06:59:12.379: INFO: got output "I"
STEP: exposing timestamps
Aug 21 06:59:12.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5215 --tail=1 --timestamps'
Aug 21 06:59:13.537: INFO: stderr: ""
Aug 21 06:59:13.537: INFO: stdout: "2020-08-21T06:59:13.426870994Z I0821 06:59:13.426707       1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/qs7 561\n"
Aug 21 06:59:13.537: INFO: got output "2020-08-21T06:59:13.426870994Z I0821 06:59:13.426707       1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/qs7 561\n"
STEP: restricting to a time range
Aug 21 06:59:16.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5215 --since=1s'
Aug 21 06:59:17.209: INFO: stderr: ""
Aug 21 06:59:17.209: INFO: stdout: "I0821 06:59:16.226752       1 logs_generator.go:76] 45 POST /api/v1/namespaces/kube-system/pods/fkch 513\nI0821 06:59:16.426733       1 logs_generator.go:76] 46 POST /api/v1/namespaces/kube-system/pods/wpdg 475\nI0821 06:59:16.626667       1 logs_generator.go:76] 47 PUT /api/v1/namespaces/ns/pods/q4n 304\nI0821 06:59:16.826670       1 logs_generator.go:76] 48 GET /api/v1/namespaces/ns/pods/dn2d 384\nI0821 06:59:17.026802       1 logs_generator.go:76] 49 POST /api/v1/namespaces/default/pods/qp4v 458\n"
Aug 21 06:59:17.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5215 --since=24h'
Aug 21 06:59:18.342: INFO: stderr: ""
Aug 21 06:59:18.343: INFO: stdout: "I0821 06:59:07.226556       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/wlz 500\nI0821 06:59:07.426713       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/khv2 383\nI0821 06:59:07.626748       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/h24 547\nI0821 06:59:07.826793       1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/gwp 553\nI0821 06:59:08.026728       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/llw 542\nI0821 06:59:08.226834       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/vhv7 493\nI0821 06:59:08.426703       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/cb2j 201\nI0821 06:59:08.626773       1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/ttp 386\nI0821 06:59:08.826751       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/rx7 535\nI0821 06:59:09.026733       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/jgw 597\nI0821 06:59:09.226714       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/qbk 441\nI0821 06:59:09.426694       1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/s49 285\nI0821 06:59:09.626775       1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/x6w 471\nI0821 06:59:09.826707       1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/6pv 592\nI0821 06:59:10.026793       1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/hwn 202\nI0821 06:59:10.226719       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/nrg 590\nI0821 06:59:10.426718       1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/r2c5 203\nI0821 06:59:10.626747       1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/b8tm 589\nI0821 06:59:10.826761       1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/4ggl 224\nI0821 06:59:11.026736       1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/wrfh 353\nI0821 06:59:11.226693       1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/2wb 419\nI0821 06:59:11.426735       1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/trw 589\nI0821 06:59:11.626716       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/j2f 441\nI0821 06:59:11.826757       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/drh9 400\nI0821 06:59:12.026750       1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/wcsh 213\nI0821 06:59:12.226748       1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/xk5 500\nI0821 06:59:12.426681       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/wtq7 323\nI0821 06:59:12.626739       1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/xz45 458\nI0821 06:59:12.826678       1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/4t8s 463\nI0821 06:59:13.026686       1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/r59 550\nI0821 06:59:13.226752       1 logs_generator.go:76] 30 PUT /api/v1/namespaces/default/pods/nb8x 234\nI0821 06:59:13.426707       1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/qs7 561\nI0821 06:59:13.626778       1 logs_generator.go:76] 32 POST /api/v1/namespaces/default/pods/stb 309\nI0821 06:59:13.826745       1 logs_generator.go:76] 33 PUT /api/v1/namespaces/default/pods/sphd 544\nI0821 06:59:14.026720       1 logs_generator.go:76] 34 POST /api/v1/namespaces/ns/pods/xp5l 295\nI0821 06:59:14.226707  
     1 logs_generator.go:76] 35 POST /api/v1/namespaces/default/pods/tng 552\nI0821 06:59:14.426761       1 logs_generator.go:76] 36 PUT /api/v1/namespaces/default/pods/ptxv 420\nI0821 06:59:14.626792       1 logs_generator.go:76] 37 PUT /api/v1/namespaces/ns/pods/zsk 540\nI0821 06:59:14.826758       1 logs_generator.go:76] 38 PUT /api/v1/namespaces/kube-system/pods/xml 582\nI0821 06:59:15.026775       1 logs_generator.go:76] 39 GET /api/v1/namespaces/default/pods/2l5j 594\nI0821 06:59:15.226757       1 logs_generator.go:76] 40 POST /api/v1/namespaces/kube-system/pods/2q2 212\nI0821 06:59:15.426775       1 logs_generator.go:76] 41 PUT /api/v1/namespaces/kube-system/pods/tzv 205\nI0821 06:59:15.626746       1 logs_generator.go:76] 42 POST /api/v1/namespaces/ns/pods/m2b 357\nI0821 06:59:15.826670       1 logs_generator.go:76] 43 PUT /api/v1/namespaces/ns/pods/s5sk 271\nI0821 06:59:16.026748       1 logs_generator.go:76] 44 PUT /api/v1/namespaces/kube-system/pods/rmkv 277\nI0821 06:59:16.226752       1 logs_generator.go:76] 45 POST /api/v1/namespaces/kube-system/pods/fkch 513\nI0821 06:59:16.426733       1 logs_generator.go:76] 46 POST /api/v1/namespaces/kube-system/pods/wpdg 475\nI0821 06:59:16.626667       1 logs_generator.go:76] 47 PUT /api/v1/namespaces/ns/pods/q4n 304\nI0821 06:59:16.826670       1 logs_generator.go:76] 48 GET /api/v1/namespaces/ns/pods/dn2d 384\nI0821 06:59:17.026802       1 logs_generator.go:76] 49 POST /api/v1/namespaces/default/pods/qp4v 458\nI0821 06:59:17.226739       1 logs_generator.go:76] 50 POST /api/v1/namespaces/ns/pods/rsf 597\nI0821 06:59:17.426699       1 logs_generator.go:76] 51 POST /api/v1/namespaces/ns/pods/tf9d 274\nI0821 06:59:17.626712       1 logs_generator.go:76] 52 GET /api/v1/namespaces/default/pods/42fw 483\nI0821 06:59:17.826760       1 logs_generator.go:76] 53 POST /api/v1/namespaces/kube-system/pods/xpj 522\nI0821 06:59:18.026780       1 logs_generator.go:76] 54 PUT /api/v1/namespaces/kube-system/pods/jl84 402\nI0821 06:59:18.226731       1 logs_generator.go:76] 55 GET /api/v1/namespaces/default/pods/wzz 347\n"
[AfterEach] Kubectl logs
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Aug 21 06:59:18.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5215'
Aug 21 06:59:29.125: INFO: stderr: ""
Aug 21 06:59:29.125: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:59:29.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5215" for this suite.

• [SLOW TEST:25.566 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":200,"skipped":3485,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
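For reference, the retrieval and filtering exercised by this test can be reproduced by hand with standard kubectl flags; a minimal sketch against the pod and namespace used above (all flags shown are stock kubectl options):
kubectl logs logs-generator --namespace=kubectl-5215                    # full pod log
kubectl logs logs-generator --namespace=kubectl-5215 --tail=25          # last 25 lines only
kubectl logs logs-generator --namespace=kubectl-5215 --since=10s        # entries from the last 10 seconds
kubectl logs logs-generator --namespace=kubectl-5215 --limit-bytes=100  # truncate output by size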
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:59:29.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:59:34.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9847" for this suite.

• [SLOW TEST:5.767 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":201,"skipped":3539,"failed":0}
SSSSSSSSS
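A hedged sketch of the mechanism this test relies on: a watch can be started from an arbitrary resourceVersion through the raw list/watch API, and watches started from the same point should deliver the same events in the same order (resource type, namespace, resourceVersion and timeout below are placeholders):
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=0&timeoutSeconds=5"
# each line of the streamed response is one watch event (ADDED/MODIFIED/DELETED)
# carrying the object's metadata.resourceVersion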
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:59:34.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Aug 21 06:59:35.043: INFO: Waiting up to 5m0s for pod "client-containers-ed248572-bb0b-40e3-962f-3430f0648016" in namespace "containers-5110" to be "Succeeded or Failed"
Aug 21 06:59:35.114: INFO: Pod "client-containers-ed248572-bb0b-40e3-962f-3430f0648016": Phase="Pending", Reason="", readiness=false. Elapsed: 71.295972ms
Aug 21 06:59:37.124: INFO: Pod "client-containers-ed248572-bb0b-40e3-962f-3430f0648016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080684429s
Aug 21 06:59:39.130: INFO: Pod "client-containers-ed248572-bb0b-40e3-962f-3430f0648016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087378881s
STEP: Saw pod success
Aug 21 06:59:39.130: INFO: Pod "client-containers-ed248572-bb0b-40e3-962f-3430f0648016" satisfied condition "Succeeded or Failed"
Aug 21 06:59:39.135: INFO: Trying to get logs from node kali-worker2 pod client-containers-ed248572-bb0b-40e3-962f-3430f0648016 container test-container: 
STEP: delete the pod
Aug 21 06:59:39.176: INFO: Waiting for pod client-containers-ed248572-bb0b-40e3-962f-3430f0648016 to disappear
Aug 21 06:59:39.186: INFO: Pod client-containers-ed248572-bb0b-40e3-962f-3430f0648016 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:59:39.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5110" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3548,"failed":0}
S
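The behaviour verified above corresponds to setting the container's args, which replaces the image's default CMD; a minimal sketch (pod name is a placeholder; arguments after "--" become the container args):
kubectl run args-demo --image=busybox --restart=Never -- echo overridden args
kubectl logs args-demo    # expected output: overridden args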
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:59:39.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 06:59:39.321: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d27349b9-0f1e-477d-9f7d-41357a46d2ff" in namespace "security-context-test-935" to be "Succeeded or Failed"
Aug 21 06:59:39.331: INFO: Pod "busybox-user-65534-d27349b9-0f1e-477d-9f7d-41357a46d2ff": Phase="Pending", Reason="", readiness=false. Elapsed: 10.107467ms
Aug 21 06:59:41.346: INFO: Pod "busybox-user-65534-d27349b9-0f1e-477d-9f7d-41357a46d2ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025098372s
Aug 21 06:59:43.353: INFO: Pod "busybox-user-65534-d27349b9-0f1e-477d-9f7d-41357a46d2ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032024541s
Aug 21 06:59:43.353: INFO: Pod "busybox-user-65534-d27349b9-0f1e-477d-9f7d-41357a46d2ff" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:59:43.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-935" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3549,"failed":0}

------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:59:43.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 06:59:43.687: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-80c22890-a7ce-42dc-8694-2a43ae4efe42" in namespace "security-context-test-2733" to be "Succeeded or Failed"
Aug 21 06:59:43.743: INFO: Pod "busybox-privileged-false-80c22890-a7ce-42dc-8694-2a43ae4efe42": Phase="Pending", Reason="", readiness=false. Elapsed: 55.680291ms
Aug 21 06:59:45.874: INFO: Pod "busybox-privileged-false-80c22890-a7ce-42dc-8694-2a43ae4efe42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187101682s
Aug 21 06:59:47.883: INFO: Pod "busybox-privileged-false-80c22890-a7ce-42dc-8694-2a43ae4efe42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.195179877s
Aug 21 06:59:47.883: INFO: Pod "busybox-privileged-false-80c22890-a7ce-42dc-8694-2a43ae4efe42" satisfied condition "Succeeded or Failed"
Aug 21 06:59:47.891: INFO: Got logs for pod "busybox-privileged-false-80c22890-a7ce-42dc-8694-2a43ae4efe42": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:59:47.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2733" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3549,"failed":0}
SS
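The RTNETLINK error captured above is the expected symptom: without privileged: true the container lacks CAP_NET_ADMIN, so network configuration changes are refused. A hedged reproduction (pod name is a placeholder; pods created by kubectl run are unprivileged by default):
kubectl run unpriv-demo --image=busybox --restart=Never -- ip link add dummy1 type dummy
kubectl logs unpriv-demo    # expected: ip: RTNETLINK answers: Operation not permitted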
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:59:47.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-d3eabb16-5471-45ec-b44d-d0315724bac4
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-d3eabb16-5471-45ec-b44d-d0315724bac4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 06:59:56.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4439" for this suite.

• [SLOW TEST:8.228 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3551,"failed":0}
SSSSSSSSSSS
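A hedged sketch of the update path this test observes, assuming a pod that mounts the ConfigMap through a projected volume at /etc/projected (the mount path and pod name are placeholders); the kubelet resyncs the projected file on its periodic sync, typically within about a minute:
kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl create configmap demo-cm --from-literal=data-1=value-2 \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl exec <pod> -- cat /etc/projected/data-1    # eventually shows value-2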
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 06:59:56.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-3365
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 21 06:59:56.232: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 21 06:59:56.330: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 06:59:58.636: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 07:00:00.338: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 07:00:02.338: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 07:00:04.338: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 07:00:06.338: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 07:00:08.338: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 07:00:10.338: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 07:00:12.337: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 07:00:14.344: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 07:00:16.338: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 07:00:18.341: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 21 07:00:18.350: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 21 07:00:22.398: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.245:8080/dial?request=hostname&protocol=udp&host=10.244.2.244&port=8081&tries=1'] Namespace:pod-network-test-3365 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 07:00:22.398: INFO: >>> kubeConfig: /root/.kube/config
I0821 07:00:22.513040      10 log.go:172] (0xa68d960) (0xa68dce0) Create stream
I0821 07:00:22.513236      10 log.go:172] (0xa68d960) (0xa68dce0) Stream added, broadcasting: 1
I0821 07:00:22.518173      10 log.go:172] (0xa68d960) Reply frame received for 1
I0821 07:00:22.518409      10 log.go:172] (0xa68d960) (0x8632700) Create stream
I0821 07:00:22.518532      10 log.go:172] (0xa68d960) (0x8632700) Stream added, broadcasting: 3
I0821 07:00:22.520613      10 log.go:172] (0xa68d960) Reply frame received for 3
I0821 07:00:22.520869      10 log.go:172] (0xa68d960) (0x8632a80) Create stream
I0821 07:00:22.520967      10 log.go:172] (0xa68d960) (0x8632a80) Stream added, broadcasting: 5
I0821 07:00:22.522760      10 log.go:172] (0xa68d960) Reply frame received for 5
I0821 07:00:22.605520      10 log.go:172] (0xa68d960) Data frame received for 3
I0821 07:00:22.605764      10 log.go:172] (0x8632700) (3) Data frame handling
I0821 07:00:22.606020      10 log.go:172] (0x8632700) (3) Data frame sent
I0821 07:00:22.606386      10 log.go:172] (0xa68d960) Data frame received for 5
I0821 07:00:22.606564      10 log.go:172] (0x8632a80) (5) Data frame handling
I0821 07:00:22.606702      10 log.go:172] (0xa68d960) Data frame received for 3
I0821 07:00:22.606846      10 log.go:172] (0x8632700) (3) Data frame handling
I0821 07:00:22.608972      10 log.go:172] (0xa68d960) Data frame received for 1
I0821 07:00:22.609084      10 log.go:172] (0xa68dce0) (1) Data frame handling
I0821 07:00:22.609196      10 log.go:172] (0xa68dce0) (1) Data frame sent
I0821 07:00:22.609314      10 log.go:172] (0xa68d960) (0xa68dce0) Stream removed, broadcasting: 1
I0821 07:00:22.609463      10 log.go:172] (0xa68d960) Go away received
I0821 07:00:22.609917      10 log.go:172] (0xa68d960) (0xa68dce0) Stream removed, broadcasting: 1
I0821 07:00:22.610043      10 log.go:172] (0xa68d960) (0x8632700) Stream removed, broadcasting: 3
I0821 07:00:22.610226      10 log.go:172] (0xa68d960) (0x8632a80) Stream removed, broadcasting: 5
Aug 21 07:00:22.610: INFO: Waiting for responses: map[]
Aug 21 07:00:22.616: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.245:8080/dial?request=hostname&protocol=udp&host=10.244.1.14&port=8081&tries=1'] Namespace:pod-network-test-3365 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 07:00:22.616: INFO: >>> kubeConfig: /root/.kube/config
I0821 07:00:22.722943      10 log.go:172] (0x8ad4000) (0x8ad4070) Create stream
I0821 07:00:22.723071      10 log.go:172] (0x8ad4000) (0x8ad4070) Stream added, broadcasting: 1
I0821 07:00:22.730094      10 log.go:172] (0x8ad4000) Reply frame received for 1
I0821 07:00:22.730358      10 log.go:172] (0x8ad4000) (0xaba0540) Create stream
I0821 07:00:22.730484      10 log.go:172] (0x8ad4000) (0xaba0540) Stream added, broadcasting: 3
I0821 07:00:22.732042      10 log.go:172] (0x8ad4000) Reply frame received for 3
I0821 07:00:22.732228      10 log.go:172] (0x8ad4000) (0x98c7e30) Create stream
I0821 07:00:22.732326      10 log.go:172] (0x8ad4000) (0x98c7e30) Stream added, broadcasting: 5
I0821 07:00:22.733875      10 log.go:172] (0x8ad4000) Reply frame received for 5
I0821 07:00:22.805099      10 log.go:172] (0x8ad4000) Data frame received for 3
I0821 07:00:22.805246      10 log.go:172] (0xaba0540) (3) Data frame handling
I0821 07:00:22.805386      10 log.go:172] (0xaba0540) (3) Data frame sent
I0821 07:00:22.805503      10 log.go:172] (0x8ad4000) Data frame received for 3
I0821 07:00:22.805632      10 log.go:172] (0x8ad4000) Data frame received for 5
I0821 07:00:22.805781      10 log.go:172] (0x98c7e30) (5) Data frame handling
I0821 07:00:22.805889      10 log.go:172] (0xaba0540) (3) Data frame handling
I0821 07:00:22.807370      10 log.go:172] (0x8ad4000) Data frame received for 1
I0821 07:00:22.807616      10 log.go:172] (0x8ad4070) (1) Data frame handling
I0821 07:00:22.807769      10 log.go:172] (0x8ad4070) (1) Data frame sent
I0821 07:00:22.807905      10 log.go:172] (0x8ad4000) (0x8ad4070) Stream removed, broadcasting: 1
I0821 07:00:22.808074      10 log.go:172] (0x8ad4000) Go away received
I0821 07:00:22.808613      10 log.go:172] (0x8ad4000) (0x8ad4070) Stream removed, broadcasting: 1
I0821 07:00:22.808862      10 log.go:172] (0x8ad4000) (0xaba0540) Stream removed, broadcasting: 3
I0821 07:00:22.809036      10 log.go:172] (0x8ad4000) (0x98c7e30) Stream removed, broadcasting: 5
Aug 21 07:00:22.809: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:00:22.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3365" for this suite.

• [SLOW TEST:26.691 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3562,"failed":0}
SSSSSSSSSSSSSSS
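The UDP check above can be replayed by hand from the test pod; the /dial endpoint asks the target to report its hostname over the requested protocol (IPs, pod names and namespace are taken from this run and will differ elsewhere):
kubectl exec -n pod-network-test-3365 test-container-pod -- \
  curl -g -q -s 'http://10.244.2.245:8080/dial?request=hostname&protocol=udp&host=10.244.2.244&port=8081&tries=1'
# a successful reply lists the target pod's hostname in its "responses" field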
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:00:22.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 07:00:22.937: INFO: Waiting up to 5m0s for pod "downwardapi-volume-196c11e9-546b-462f-a849-1001802fb94e" in namespace "projected-4533" to be "Succeeded or Failed"
Aug 21 07:00:22.999: INFO: Pod "downwardapi-volume-196c11e9-546b-462f-a849-1001802fb94e": Phase="Pending", Reason="", readiness=false. Elapsed: 61.874414ms
Aug 21 07:00:25.007: INFO: Pod "downwardapi-volume-196c11e9-546b-462f-a849-1001802fb94e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069494888s
Aug 21 07:00:27.016: INFO: Pod "downwardapi-volume-196c11e9-546b-462f-a849-1001802fb94e": Phase="Running", Reason="", readiness=true. Elapsed: 4.078238373s
Aug 21 07:00:29.023: INFO: Pod "downwardapi-volume-196c11e9-546b-462f-a849-1001802fb94e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085650765s
STEP: Saw pod success
Aug 21 07:00:29.023: INFO: Pod "downwardapi-volume-196c11e9-546b-462f-a849-1001802fb94e" satisfied condition "Succeeded or Failed"
Aug 21 07:00:29.029: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-196c11e9-546b-462f-a849-1001802fb94e container client-container: 
STEP: delete the pod
Aug 21 07:00:29.082: INFO: Waiting for pod downwardapi-volume-196c11e9-546b-462f-a849-1001802fb94e to disappear
Aug 21 07:00:29.138: INFO: Pod downwardapi-volume-196c11e9-546b-462f-a849-1001802fb94e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:00:29.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4533" for this suite.

• [SLOW TEST:6.349 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3577,"failed":0}
SSSSSSSSSSSSSSSSS
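When a container declares no CPU limit, the downward API's limits.cpu resourceFieldRef falls back to the node's allocatable CPU, which can be read directly (node name taken from this run):
kubectl get node kali-worker2 -o jsonpath='{.status.allocatable.cpu}'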
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:00:29.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 07:00:29.539: INFO: Create a RollingUpdate DaemonSet
Aug 21 07:00:29.546: INFO: Check that daemon pods launch on every node of the cluster
Aug 21 07:00:29.553: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 07:00:29.557: INFO: Number of nodes with available pods: 0
Aug 21 07:00:29.557: INFO: Node kali-worker is running more than one daemon pod
Aug 21 07:00:30.566: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 07:00:30.572: INFO: Number of nodes with available pods: 0
Aug 21 07:00:30.572: INFO: Node kali-worker is running more than one daemon pod
Aug 21 07:00:31.602: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 07:00:31.612: INFO: Number of nodes with available pods: 0
Aug 21 07:00:31.612: INFO: Node kali-worker is running more than one daemon pod
Aug 21 07:00:32.569: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 07:00:32.575: INFO: Number of nodes with available pods: 0
Aug 21 07:00:32.576: INFO: Node kali-worker is running more than one daemon pod
Aug 21 07:00:33.565: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 07:00:33.571: INFO: Number of nodes with available pods: 1
Aug 21 07:00:33.571: INFO: Node kali-worker is running more than one daemon pod
Aug 21 07:00:34.569: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 07:00:34.577: INFO: Number of nodes with available pods: 2
Aug 21 07:00:34.577: INFO: Number of running nodes: 2, number of available pods: 2
Aug 21 07:00:34.577: INFO: Update the DaemonSet to trigger a rollout
Aug 21 07:00:34.591: INFO: Updating DaemonSet daemon-set
Aug 21 07:00:49.642: INFO: Roll back the DaemonSet before rollout is complete
Aug 21 07:00:49.653: INFO: Updating DaemonSet daemon-set
Aug 21 07:00:49.653: INFO: Make sure DaemonSet rollback is complete
Aug 21 07:00:49.683: INFO: Wrong image for pod: daemon-set-ltp5t. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 21 07:00:49.683: INFO: Pod daemon-set-ltp5t is not available
Aug 21 07:00:49.698: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 07:00:50.706: INFO: Wrong image for pod: daemon-set-ltp5t. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 21 07:00:50.706: INFO: Pod daemon-set-ltp5t is not available
Aug 21 07:00:50.713: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 07:00:51.707: INFO: Wrong image for pod: daemon-set-ltp5t. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 21 07:00:51.707: INFO: Pod daemon-set-ltp5t is not available
Aug 21 07:00:51.716: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 07:00:52.708: INFO: Wrong image for pod: daemon-set-ltp5t. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 21 07:00:52.708: INFO: Pod daemon-set-ltp5t is not available
Aug 21 07:00:52.716: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 07:00:53.709: INFO: Pod daemon-set-xnzxt is not available
Aug 21 07:00:53.717: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9344, will wait for the garbage collector to delete the pods
Aug 21 07:00:53.792: INFO: Deleting DaemonSet.extensions daemon-set took: 9.044191ms
Aug 21 07:00:54.093: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.865577ms
Aug 21 07:00:57.299: INFO: Number of nodes with available pods: 0
Aug 21 07:00:57.300: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 07:00:57.304: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9344/daemonsets","resourceVersion":"2031541"},"items":null}

Aug 21 07:00:57.308: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9344/pods","resourceVersion":"2031541"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:00:57.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9344" for this suite.

• [SLOW TEST:28.158 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":208,"skipped":3594,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
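A hedged sketch of the same rollback performed by hand (namespace, DaemonSet name and bad image are from this run; the container name "app" is a placeholder):
kubectl -n daemonsets-9344 set image daemonset/daemon-set app=foo:non-existent   # trigger a failing rollout
kubectl -n daemonsets-9344 rollout undo daemonset/daemon-set                     # roll back to the previous image
kubectl -n daemonsets-9344 rollout status daemonset/daemon-set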
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:00:57.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 07:00:57.452: INFO: (0) /api/v1/nodes/kali-worker2:10250/proxy/logs/: 
alternatives.log
containers/
[identical two-entry kubelet log listing repeated for the remaining 19 proxied requests; the rest of this test's output is truncated here]
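The same kubelet log listing can be fetched by hand through the node proxy subresource used above (node name taken from this run):
kubectl get --raw "/api/v1/nodes/kali-worker2:10250/proxy/logs/"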
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Aug 21 07:00:57.676: INFO: Waiting up to 5m0s for pod "var-expansion-b1c1b58f-db94-44cf-b197-8df8a72421ff" in namespace "var-expansion-2657" to be "Succeeded or Failed"
Aug 21 07:00:57.707: INFO: Pod "var-expansion-b1c1b58f-db94-44cf-b197-8df8a72421ff": Phase="Pending", Reason="", readiness=false. Elapsed: 30.314928ms
Aug 21 07:00:59.714: INFO: Pod "var-expansion-b1c1b58f-db94-44cf-b197-8df8a72421ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037548355s
Aug 21 07:01:01.721: INFO: Pod "var-expansion-b1c1b58f-db94-44cf-b197-8df8a72421ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044974166s
STEP: Saw pod success
Aug 21 07:01:01.722: INFO: Pod "var-expansion-b1c1b58f-db94-44cf-b197-8df8a72421ff" satisfied condition "Succeeded or Failed"
Aug 21 07:01:01.726: INFO: Trying to get logs from node kali-worker2 pod var-expansion-b1c1b58f-db94-44cf-b197-8df8a72421ff container dapi-container: 
STEP: delete the pod
Aug 21 07:01:01.790: INFO: Waiting for pod var-expansion-b1c1b58f-db94-44cf-b197-8df8a72421ff to disappear
Aug 21 07:01:01.801: INFO: Pod var-expansion-b1c1b58f-db94-44cf-b197-8df8a72421ff no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:01:01.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2657" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3660,"failed":0}
SSSS
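A hedged sketch of the substitution being verified, using an inline override on kubectl run (names are placeholders): $(VAR) references in a container's command or args are expanded by the kubelet from the container's environment before the process starts.
kubectl run expansion-demo --image=busybox --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"expansion-demo","image":"busybox","env":[{"name":"GREETING","value":"hello"}],"args":["sh","-c","echo $(GREETING)"]}]}}'
kubectl logs expansion-demo    # expected output: hello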
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:01:01.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6449.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6449.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6449.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6449.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6449.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6449.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6449.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6449.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6449.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6449.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 82.137.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.137.82_udp@PTR;check="$$(dig +tcp +noall +answer +search 82.137.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.137.82_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6449.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6449.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6449.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6449.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6449.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6449.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6449.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6449.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6449.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6449.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6449.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 82.137.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.137.82_udp@PTR;check="$$(dig +tcp +noall +answer +search 82.137.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.137.82_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 07:01:08.352: INFO: Unable to read wheezy_udp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:08.357: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:08.362: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:08.367: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:08.397: INFO: Unable to read jessie_udp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:08.401: INFO: Unable to read jessie_tcp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:08.405: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:08.410: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:08.436: INFO: Lookups using dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5 failed for: [wheezy_udp@dns-test-service.dns-6449.svc.cluster.local wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local jessie_udp@dns-test-service.dns-6449.svc.cluster.local jessie_tcp@dns-test-service.dns-6449.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local]

Aug 21 07:01:13.444: INFO: Unable to read wheezy_udp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:13.450: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:13.455: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:13.460: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:13.493: INFO: Unable to read jessie_udp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:13.498: INFO: Unable to read jessie_tcp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:13.502: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:13.507: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:13.536: INFO: Lookups using dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5 failed for: [wheezy_udp@dns-test-service.dns-6449.svc.cluster.local wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local jessie_udp@dns-test-service.dns-6449.svc.cluster.local jessie_tcp@dns-test-service.dns-6449.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local]

Aug 21 07:01:18.445: INFO: Unable to read wheezy_udp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:18.451: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:18.456: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:18.460: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:18.492: INFO: Unable to read jessie_udp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:18.497: INFO: Unable to read jessie_tcp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:18.502: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:18.507: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:18.539: INFO: Lookups using dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5 failed for: [wheezy_udp@dns-test-service.dns-6449.svc.cluster.local wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local jessie_udp@dns-test-service.dns-6449.svc.cluster.local jessie_tcp@dns-test-service.dns-6449.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local]

Aug 21 07:01:23.444: INFO: Unable to read wheezy_udp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:23.450: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:23.455: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:23.460: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:23.504: INFO: Unable to read jessie_udp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:23.508: INFO: Unable to read jessie_tcp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:23.512: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:23.516: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:23.543: INFO: Lookups using dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5 failed for: [wheezy_udp@dns-test-service.dns-6449.svc.cluster.local wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local jessie_udp@dns-test-service.dns-6449.svc.cluster.local jessie_tcp@dns-test-service.dns-6449.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local]

Aug 21 07:01:28.444: INFO: Unable to read wheezy_udp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:28.449: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:28.453: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:28.457: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:28.487: INFO: Unable to read jessie_udp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:28.492: INFO: Unable to read jessie_tcp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:28.496: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:28.500: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:28.527: INFO: Lookups using dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5 failed for: [wheezy_udp@dns-test-service.dns-6449.svc.cluster.local wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local jessie_udp@dns-test-service.dns-6449.svc.cluster.local jessie_tcp@dns-test-service.dns-6449.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local]

Aug 21 07:01:33.444: INFO: Unable to read wheezy_udp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:33.451: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:33.456: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:33.459: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:33.491: INFO: Unable to read jessie_udp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:33.496: INFO: Unable to read jessie_tcp@dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:33.500: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:33.505: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local from pod dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5: the server could not find the requested resource (get pods dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5)
Aug 21 07:01:33.531: INFO: Lookups using dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5 failed for: [wheezy_udp@dns-test-service.dns-6449.svc.cluster.local wheezy_tcp@dns-test-service.dns-6449.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local jessie_udp@dns-test-service.dns-6449.svc.cluster.local jessie_tcp@dns-test-service.dns-6449.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6449.svc.cluster.local]

Aug 21 07:01:38.524: INFO: DNS probes using dns-6449/dns-test-ec7edf44-c077-4754-941a-7ad1bbec33d5 succeeded

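For reference, the lookups that kept being retried above are plain A-record queries for the test service name and, most likely, SRV queries for the _http._tcp names, each tried over both UDP and TCP. Run from a pod inside the cluster while the test namespace still existed, rough equivalents of those probes would be:

dig +notcp +noall +answer +search dns-test-service.dns-6449.svc.cluster.local A
dig +tcp +noall +answer +search dns-test-service.dns-6449.svc.cluster.local A
dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6449.svc.cluster.local SRV
dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6449.svc.cluster.local SRV

The probes are retried roughly every five seconds until every listed name resolves, which is what the final "DNS probes ... succeeded" line records.
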
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:01:39.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6449" for this suite.

• [SLOW TEST:37.514 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":211,"skipped":3664,"failed":0}
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:01:39.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5984.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5984.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5984.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5984.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5984.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5984.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

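Two details of the probe commands above are easy to misread. First, the doubled $$ is the Kubernetes escape for a literal $ inside a container command field (a single $(...) would be interpreted as a Kubernetes variable reference), so the script the container actually runs uses ordinary shell $(...) substitution. Second, the PodARecord check rewrites the probe pod's own IP into the dashed pod A-record form before resolving it. Written as a plain shell fragment run inside the probe pod (assuming dns-5984 is still the test namespace), that step amounts to:

podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5984.pod.cluster.local"}')
dig +notcp +noall +answer +search "${podARec}" A   # UDP lookup of e.g. 10-244-1-23.dns-5984.pod.cluster.local (IP is illustrative)
dig +tcp +noall +answer +search "${podARec}" A     # the same lookup over TCP
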
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 07:01:45.573: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:45.578: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:45.582: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:45.586: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:45.600: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:45.604: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:45.609: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:45.613: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:45.622: INFO: Lookups using dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local]

Aug 21 07:01:50.630: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:50.635: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:50.640: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:50.644: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:50.657: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:50.662: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:50.667: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:50.672: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:50.679: INFO: Lookups using dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local]

Aug 21 07:01:55.629: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:55.635: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:55.641: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:55.646: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:55.657: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:55.661: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:55.665: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:55.669: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:01:55.677: INFO: Lookups using dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local]

Aug 21 07:02:00.630: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:00.636: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:00.645: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:00.649: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:00.661: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:00.666: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:00.669: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:00.673: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:00.681: INFO: Lookups using dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local]

Aug 21 07:02:05.630: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:05.636: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:05.642: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:05.647: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:05.661: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:05.665: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:05.670: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:05.674: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:05.682: INFO: Lookups using dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local]

Aug 21 07:02:10.630: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:10.636: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:10.641: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:10.647: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:10.661: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:10.666: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:10.671: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:10.676: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local from pod dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5: the server could not find the requested resource (get pods dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5)
Aug 21 07:02:10.685: INFO: Lookups using dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5984.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5984.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5984.svc.cluster.local jessie_udp@dns-test-service-2.dns-5984.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5984.svc.cluster.local]

Aug 21 07:02:15.675: INFO: DNS probes using dns-5984/dns-test-21b06c3e-51fb-49d3-a1dc-2413df6601b5 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:02:16.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5984" for this suite.

• [SLOW TEST:37.075 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":212,"skipped":3668,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:02:16.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 07:02:16.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 21 07:02:35.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7801 create -f -'
Aug 21 07:02:39.748: INFO: stderr: ""
Aug 21 07:02:39.748: INFO: stdout: "e2e-test-crd-publish-openapi-1939-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 21 07:02:39.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7801 delete e2e-test-crd-publish-openapi-1939-crds test-cr'
Aug 21 07:02:40.911: INFO: stderr: ""
Aug 21 07:02:40.911: INFO: stdout: "e2e-test-crd-publish-openapi-1939-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 21 07:02:40.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7801 apply -f -'
Aug 21 07:02:42.394: INFO: stderr: ""
Aug 21 07:02:42.394: INFO: stdout: "e2e-test-crd-publish-openapi-1939-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 21 07:02:42.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7801 delete e2e-test-crd-publish-openapi-1939-crds test-cr'
Aug 21 07:02:43.513: INFO: stderr: ""
Aug 21 07:02:43.514: INFO: stdout: "e2e-test-crd-publish-openapi-1939-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 21 07:02:43.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1939-crds'
Aug 21 07:02:44.941: INFO: stderr: ""
Aug 21 07:02:44.941: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1939-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
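
The sequence above can be reproduced by hand while a CRD like this one (schema with x-kubernetes-preserve-unknown-fields on an embedded object) is installed. The manifest below is a reconstruction, and anyUnknownField is a made-up property chosen precisely because the schema does not declare it:

cat <<'EOF' | kubectl --namespace=crd-publish-openapi-7801 create -f -
apiVersion: crd-publish-openapi-test-unknown-in-nested.example.com/v1
kind: E2e-test-crd-publish-openapi-1939-crd
metadata:
  name: test-cr
spec:
  anyUnknownField: accepted   # hypothetical field, not part of the published schema
EOF
kubectl --namespace=crd-publish-openapi-7801 delete e2e-test-crd-publish-openapi-1939-crds test-cr
kubectl explain e2e-test-crd-publish-openapi-1939-crds

Because unknown fields are preserved rather than pruned, client-side validation accepts the extra property, and kubectl explain still renders the published description shown in the stdout above.
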
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:03:03.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7801" for this suite.

• [SLOW TEST:47.254 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":213,"skipped":3675,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:03:03.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-e98ec731-f4ce-4085-b251-30e9d57a1339
STEP: Creating a pod to test consume secrets
Aug 21 07:03:03.803: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bfe37ef4-9b90-4893-b1ad-a51b005e4b21" in namespace "projected-6440" to be "Succeeded or Failed"
Aug 21 07:03:03.806: INFO: Pod "pod-projected-secrets-bfe37ef4-9b90-4893-b1ad-a51b005e4b21": Phase="Pending", Reason="", readiness=false. Elapsed: 3.08746ms
Aug 21 07:03:05.813: INFO: Pod "pod-projected-secrets-bfe37ef4-9b90-4893-b1ad-a51b005e4b21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010610762s
Aug 21 07:03:07.821: INFO: Pod "pod-projected-secrets-bfe37ef4-9b90-4893-b1ad-a51b005e4b21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017678115s
STEP: Saw pod success
Aug 21 07:03:07.821: INFO: Pod "pod-projected-secrets-bfe37ef4-9b90-4893-b1ad-a51b005e4b21" satisfied condition "Succeeded or Failed"
Aug 21 07:03:07.826: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-bfe37ef4-9b90-4893-b1ad-a51b005e4b21 container projected-secret-volume-test: 
STEP: delete the pod
Aug 21 07:03:07.861: INFO: Waiting for pod pod-projected-secrets-bfe37ef4-9b90-4893-b1ad-a51b005e4b21 to disappear
Aug 21 07:03:07.912: INFO: Pod pod-projected-secrets-bfe37ef4-9b90-4893-b1ad-a51b005e4b21 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:03:07.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6440" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3682,"failed":0}
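
What this spec exercises is the defaultMode field of a projected volume: every file projected from the secret is created with that mode unless an individual item overrides it. A minimal reconstruction, with the pod name and mode chosen for illustration rather than taken from the suite:

cat <<'EOF' | kubectl --namespace=projected-6440 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: defaultmode-demo              # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400               # the mode under test; the suite's exact value may differ
      sources:
      - secret:
          name: projected-secret-test-e98ec731-f4ce-4085-b251-30e9d57a1339
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/projected-secret-volume"]   # -L follows the ..data symlinks the kubelet creates
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
EOF
kubectl --namespace=projected-6440 logs defaultmode-demo
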
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:03:07.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 21 07:03:07.999: INFO: Waiting up to 5m0s for pod "pod-beded4c2-ba0e-42b7-9804-c88b96976204" in namespace "emptydir-5308" to be "Succeeded or Failed"
Aug 21 07:03:08.039: INFO: Pod "pod-beded4c2-ba0e-42b7-9804-c88b96976204": Phase="Pending", Reason="", readiness=false. Elapsed: 38.906214ms
Aug 21 07:03:10.087: INFO: Pod "pod-beded4c2-ba0e-42b7-9804-c88b96976204": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087065337s
Aug 21 07:03:12.093: INFO: Pod "pod-beded4c2-ba0e-42b7-9804-c88b96976204": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093886536s
STEP: Saw pod success
Aug 21 07:03:12.094: INFO: Pod "pod-beded4c2-ba0e-42b7-9804-c88b96976204" satisfied condition "Succeeded or Failed"
Aug 21 07:03:12.098: INFO: Trying to get logs from node kali-worker pod pod-beded4c2-ba0e-42b7-9804-c88b96976204 container test-container: 
STEP: delete the pod
Aug 21 07:03:12.135: INFO: Waiting for pod pod-beded4c2-ba0e-42b7-9804-c88b96976204 to disappear
Aug 21 07:03:12.518: INFO: Pod pod-beded4c2-ba0e-42b7-9804-c88b96976204 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:03:12.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5308" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3701,"failed":0}
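
The "(root,0666,default)" label decodes roughly as: a file created as root, with mode 0666, on an emptyDir using the default medium (node-local disk rather than medium: Memory). A rough, illustrative stand-in for the suite's mount-test container:

cat <<'EOF' | kubectl --namespace=emptydir-5308 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo            # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir: {}                      # default medium
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/scratch/f && chmod 0666 /mnt/scratch/f && stat -c '%a %U' /mnt/scratch/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
EOF
kubectl --namespace=emptydir-5308 logs emptydir-mode-demo   # expect "666 root"
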
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:03:12.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Aug 21 07:03:12.782: INFO: Waiting up to 5m0s for pod "var-expansion-6c955ec3-2940-4172-99c4-642e25d771c9" in namespace "var-expansion-641" to be "Succeeded or Failed"
Aug 21 07:03:12.814: INFO: Pod "var-expansion-6c955ec3-2940-4172-99c4-642e25d771c9": Phase="Pending", Reason="", readiness=false. Elapsed: 31.398593ms
Aug 21 07:03:14.826: INFO: Pod "var-expansion-6c955ec3-2940-4172-99c4-642e25d771c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043697012s
Aug 21 07:03:16.833: INFO: Pod "var-expansion-6c955ec3-2940-4172-99c4-642e25d771c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05030114s
STEP: Saw pod success
Aug 21 07:03:16.833: INFO: Pod "var-expansion-6c955ec3-2940-4172-99c4-642e25d771c9" satisfied condition "Succeeded or Failed"
Aug 21 07:03:16.837: INFO: Trying to get logs from node kali-worker2 pod var-expansion-6c955ec3-2940-4172-99c4-642e25d771c9 container dapi-container: 
STEP: delete the pod
Aug 21 07:03:16.870: INFO: Waiting for pod var-expansion-6c955ec3-2940-4172-99c4-642e25d771c9 to disappear
Aug 21 07:03:16.942: INFO: Pod var-expansion-6c955ec3-2940-4172-99c4-642e25d771c9 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:03:16.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-641" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3730,"failed":0}
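
The mechanism under test here is dependent environment variables: a $(NAME) reference in an env value is expanded, before the container starts, from variables defined earlier in the same env list. A minimal self-contained example with illustrative names:

cat <<'EOF' | kubectl --namespace=var-expansion-641 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: env-composition-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep COMPOSED"]
    env:
    - name: FOO
      value: foo-value
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"   # $(FOO) is resolved from the FOO entry above
EOF
kubectl --namespace=var-expansion-641 logs env-composition-demo   # expect COMPOSED=prefix-foo-value-suffix
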
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:03:16.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 21 07:03:17.025: INFO: Waiting up to 5m0s for pod "pod-dd4c1e7b-1a2c-47d2-8c20-fd06a3c907f4" in namespace "emptydir-4795" to be "Succeeded or Failed"
Aug 21 07:03:17.057: INFO: Pod "pod-dd4c1e7b-1a2c-47d2-8c20-fd06a3c907f4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.585874ms
Aug 21 07:03:19.065: INFO: Pod "pod-dd4c1e7b-1a2c-47d2-8c20-fd06a3c907f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03900357s
Aug 21 07:03:21.071: INFO: Pod "pod-dd4c1e7b-1a2c-47d2-8c20-fd06a3c907f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045215683s
STEP: Saw pod success
Aug 21 07:03:21.071: INFO: Pod "pod-dd4c1e7b-1a2c-47d2-8c20-fd06a3c907f4" satisfied condition "Succeeded or Failed"
Aug 21 07:03:21.075: INFO: Trying to get logs from node kali-worker pod pod-dd4c1e7b-1a2c-47d2-8c20-fd06a3c907f4 container test-container: 
STEP: delete the pod
Aug 21 07:03:21.107: INFO: Waiting for pod pod-dd4c1e7b-1a2c-47d2-8c20-fd06a3c907f4 to disappear
Aug 21 07:03:21.123: INFO: Pod pod-dd4c1e7b-1a2c-47d2-8c20-fd06a3c907f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:03:21.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4795" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3744,"failed":0}

------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:03:21.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-408/configmap-test-caca3d94-d42b-474d-a7cb-abf4bb110364
STEP: Creating a pod to test consume configMaps
Aug 21 07:03:21.564: INFO: Waiting up to 5m0s for pod "pod-configmaps-25ba3505-c5b8-4843-8654-e325773abeab" in namespace "configmap-408" to be "Succeeded or Failed"
Aug 21 07:03:21.582: INFO: Pod "pod-configmaps-25ba3505-c5b8-4843-8654-e325773abeab": Phase="Pending", Reason="", readiness=false. Elapsed: 17.81217ms
Aug 21 07:03:23.645: INFO: Pod "pod-configmaps-25ba3505-c5b8-4843-8654-e325773abeab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080473907s
Aug 21 07:03:25.674: INFO: Pod "pod-configmaps-25ba3505-c5b8-4843-8654-e325773abeab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109608036s
STEP: Saw pod success
Aug 21 07:03:25.674: INFO: Pod "pod-configmaps-25ba3505-c5b8-4843-8654-e325773abeab" satisfied condition "Succeeded or Failed"
Aug 21 07:03:25.680: INFO: Trying to get logs from node kali-worker pod pod-configmaps-25ba3505-c5b8-4843-8654-e325773abeab container env-test: 
STEP: delete the pod
Aug 21 07:03:25.706: INFO: Waiting for pod pod-configmaps-25ba3505-c5b8-4843-8654-e325773abeab to disappear
Aug 21 07:03:25.710: INFO: Pod pod-configmaps-25ba3505-c5b8-4843-8654-e325773abeab no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:03:25.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-408" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3744,"failed":0}
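
"Consumable via the environment" means the pod pulls individual ConfigMap keys into environment variables with valueFrom.configMapKeyRef (envFrom would import the whole map at once). A hedged reconstruction with made-up key and object names:

kubectl --namespace=configmap-408 create configmap demo-config --from-literal=DATA_1=value-1   # illustrative ConfigMap
cat <<'EOF' | kubectl --namespace=configmap-408 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep DATA_"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: DATA_1
EOF
kubectl --namespace=configmap-408 logs configmap-env-demo   # expect DATA_1=value-1
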
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:03:25.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 21 07:03:25.814: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 07:03:25.837: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 07:03:25.841: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 21 07:03:25.853: INFO: kindnet-kkxd5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container status recorded)
Aug 21 07:03:25.853: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 07:03:25.853: INFO: kube-proxy-vn4t5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container status recorded)
Aug 21 07:03:25.853: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 07:03:25.853: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 21 07:03:25.863: INFO: kube-proxy-c52ll from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container status recorded)
Aug 21 07:03:25.863: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 07:03:25.863: INFO: kindnet-qzfqb from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container status recorded)
Aug 21 07:03:25.863: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-30443c24-b33c-440a-8a23-cf5f1d4a7264 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-30443c24-b33c-440a-8a23-cf5f1d4a7264 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-30443c24-b33c-440a-8a23-cf5f1d4a7264
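
The conflict being validated: for scheduling purposes, a hostPort bound on 0.0.0.0 claims that port for every host IP on the node, so a second pod asking for the same port and protocol on 127.0.0.1 cannot land on the same node and stays Pending. A reconstruction of the first pod follows; the second would differ only in its name and an added hostIP: 127.0.0.1 under the port. The pod names and node label come from the log, while the image is an arbitrary pause container chosen for the sketch:

cat <<'EOF' | kubectl --namespace=sched-pred-5097 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-30443c24-b33c-440a-8a23-cf5f1d4a7264: "95"
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2       # assumed image, not taken from the log
    ports:
    - containerPort: 54322
      hostPort: 54322
      protocol: TCP
EOF
kubectl --namespace=sched-pred-5097 get pods pod4 pod5   # pod5 would stay Pending with a host-port conflict event

The long gap before [AfterEach] above is the spec confirming that pod5 never gets scheduled, which is why this test runs for roughly five minutes.
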
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:08:34.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5097" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:308.385 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":219,"skipped":3763,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:08:34.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-7c0adc56-3bb3-4cfa-b06e-9461e114441a
STEP: Creating a pod to test consume configMaps
Aug 21 07:08:34.229: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a3e1f54-19bb-42ae-8d7b-e194233df471" in namespace "projected-3858" to be "Succeeded or Failed"
Aug 21 07:08:34.240: INFO: Pod "pod-projected-configmaps-1a3e1f54-19bb-42ae-8d7b-e194233df471": Phase="Pending", Reason="", readiness=false. Elapsed: 10.792797ms
Aug 21 07:08:36.331: INFO: Pod "pod-projected-configmaps-1a3e1f54-19bb-42ae-8d7b-e194233df471": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101286783s
Aug 21 07:08:38.336: INFO: Pod "pod-projected-configmaps-1a3e1f54-19bb-42ae-8d7b-e194233df471": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106889159s
Aug 21 07:08:40.416: INFO: Pod "pod-projected-configmaps-1a3e1f54-19bb-42ae-8d7b-e194233df471": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.186237156s
STEP: Saw pod success
Aug 21 07:08:40.416: INFO: Pod "pod-projected-configmaps-1a3e1f54-19bb-42ae-8d7b-e194233df471" satisfied condition "Succeeded or Failed"
Aug 21 07:08:40.514: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-1a3e1f54-19bb-42ae-8d7b-e194233df471 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 07:08:40.773: INFO: Waiting for pod pod-projected-configmaps-1a3e1f54-19bb-42ae-8d7b-e194233df471 to disappear
Aug 21 07:08:40.796: INFO: Pod pod-projected-configmaps-1a3e1f54-19bb-42ae-8d7b-e194233df471 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:08:40.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3858" for this suite.

• [SLOW TEST:6.743 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3763,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:08:40.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 21 07:08:40.936: INFO: Waiting up to 5m0s for pod "pod-a3b70595-894d-44c8-b36f-83a719f9ebc8" in namespace "emptydir-6098" to be "Succeeded or Failed"
Aug 21 07:08:41.013: INFO: Pod "pod-a3b70595-894d-44c8-b36f-83a719f9ebc8": Phase="Pending", Reason="", readiness=false. Elapsed: 77.203209ms
Aug 21 07:08:43.024: INFO: Pod "pod-a3b70595-894d-44c8-b36f-83a719f9ebc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088344568s
Aug 21 07:08:45.032: INFO: Pod "pod-a3b70595-894d-44c8-b36f-83a719f9ebc8": Phase="Running", Reason="", readiness=true. Elapsed: 4.095967579s
Aug 21 07:08:47.039: INFO: Pod "pod-a3b70595-894d-44c8-b36f-83a719f9ebc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.103608064s
STEP: Saw pod success
Aug 21 07:08:47.040: INFO: Pod "pod-a3b70595-894d-44c8-b36f-83a719f9ebc8" satisfied condition "Succeeded or Failed"
Aug 21 07:08:47.046: INFO: Trying to get logs from node kali-worker2 pod pod-a3b70595-894d-44c8-b36f-83a719f9ebc8 container test-container: 
STEP: delete the pod
Aug 21 07:08:47.138: INFO: Waiting for pod pod-a3b70595-894d-44c8-b36f-83a719f9ebc8 to disappear
Aug 21 07:08:47.173: INFO: Pod pod-a3b70595-894d-44c8-b36f-83a719f9ebc8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:08:47.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6098" for this suite.

• [SLOW TEST:6.334 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3766,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:08:47.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 21 07:08:52.079: INFO: Successfully updated pod "annotationupdate674b62f4-7dad-49f3-95b1-803c4bae690d"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:08:54.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1321" for this suite.

• [SLOW TEST:7.237 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3771,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:08:54.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 07:08:54.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug 21 07:08:55.191: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T07:08:55Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-21T07:08:55Z]] name:name1 resourceVersion:2033374 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a4ae72ee-2568-481a-bb53-665c65f439a5] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 21 07:09:05.203: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T07:09:05Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-21T07:09:05Z]] name:name2 resourceVersion:2033413 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:467c86ee-6ca6-401c-8d85-30b069747b9c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 21 07:09:15.217: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T07:08:55Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-21T07:09:15Z]] name:name1 resourceVersion:2033445 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a4ae72ee-2568-481a-bb53-665c65f439a5] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 21 07:09:25.229: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T07:09:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-21T07:09:25Z]] name:name2 resourceVersion:2033477 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:467c86ee-6ca6-401c-8d85-30b069747b9c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 21 07:09:35.243: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T07:08:55Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-21T07:09:15Z]] name:name1 resourceVersion:2033507 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a4ae72ee-2568-481a-bb53-665c65f439a5] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 21 07:09:45.257: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T07:09:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-21T07:09:25Z]] name:name2 resourceVersion:2033537 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:467c86ee-6ca6-401c-8d85-30b069747b9c] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:09:55.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-4186" for this suite.

• [SLOW TEST:61.364 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":223,"skipped":3774,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:09:55.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-82d74aa9-e808-49e7-a027-227203f4ed86
STEP: Creating a pod to test consume secrets
Aug 21 07:09:55.956: INFO: Waiting up to 5m0s for pod "pod-secrets-6c6578b8-5084-430e-9a1e-caff9050ed6e" in namespace "secrets-9428" to be "Succeeded or Failed"
Aug 21 07:09:55.974: INFO: Pod "pod-secrets-6c6578b8-5084-430e-9a1e-caff9050ed6e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.630543ms
Aug 21 07:09:57.980: INFO: Pod "pod-secrets-6c6578b8-5084-430e-9a1e-caff9050ed6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023996015s
Aug 21 07:09:59.988: INFO: Pod "pod-secrets-6c6578b8-5084-430e-9a1e-caff9050ed6e": Phase="Running", Reason="", readiness=true. Elapsed: 4.032425091s
Aug 21 07:10:01.996: INFO: Pod "pod-secrets-6c6578b8-5084-430e-9a1e-caff9050ed6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039983195s
STEP: Saw pod success
Aug 21 07:10:01.996: INFO: Pod "pod-secrets-6c6578b8-5084-430e-9a1e-caff9050ed6e" satisfied condition "Succeeded or Failed"
Aug 21 07:10:02.001: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-6c6578b8-5084-430e-9a1e-caff9050ed6e container secret-env-test: 
STEP: delete the pod
Aug 21 07:10:02.023: INFO: Waiting for pod pod-secrets-6c6578b8-5084-430e-9a1e-caff9050ed6e to disappear
Aug 21 07:10:02.027: INFO: Pod pod-secrets-6c6578b8-5084-430e-9a1e-caff9050ed6e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:10:02.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9428" for this suite.

• [SLOW TEST:6.250 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3790,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:10:02.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0821 07:10:03.242770      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 07:10:03.243: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:10:03.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3172" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":225,"skipped":3800,"failed":0}

------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:10:03.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:10:03.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8738" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":226,"skipped":3800,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:10:03.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 21 07:10:10.192: INFO: Successfully updated pod "adopt-release-9drrt"
STEP: Checking that the Job readopts the Pod
Aug 21 07:10:10.193: INFO: Waiting up to 15m0s for pod "adopt-release-9drrt" in namespace "job-8758" to be "adopted"
Aug 21 07:10:10.211: INFO: Pod "adopt-release-9drrt": Phase="Running", Reason="", readiness=true. Elapsed: 18.472673ms
Aug 21 07:10:12.218: INFO: Pod "adopt-release-9drrt": Phase="Running", Reason="", readiness=true. Elapsed: 2.025053243s
Aug 21 07:10:12.218: INFO: Pod "adopt-release-9drrt" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 21 07:10:12.735: INFO: Successfully updated pod "adopt-release-9drrt"
STEP: Checking that the Job releases the Pod
Aug 21 07:10:12.735: INFO: Waiting up to 15m0s for pod "adopt-release-9drrt" in namespace "job-8758" to be "released"
Aug 21 07:10:12.760: INFO: Pod "adopt-release-9drrt": Phase="Running", Reason="", readiness=true. Elapsed: 24.087073ms
Aug 21 07:10:14.766: INFO: Pod "adopt-release-9drrt": Phase="Running", Reason="", readiness=true. Elapsed: 2.030872956s
Aug 21 07:10:14.767: INFO: Pod "adopt-release-9drrt" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:10:14.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8758" for this suite.

• [SLOW TEST:11.277 seconds]
[sig-apps] Job
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":227,"skipped":3822,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:10:14.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 07:10:21.687: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 07:10:23.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733590621, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733590621, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733590621, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733590621, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 07:10:26.749: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 07:10:26.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5900-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:10:28.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6173" for this suite.
STEP: Destroying namespace "webhook-6173-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.341 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":228,"skipped":3826,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:10:28.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 21 07:10:28.257: INFO: Waiting up to 5m0s for pod "pod-bc4ef94f-379f-4996-977f-40a80c26d17d" in namespace "emptydir-1714" to be "Succeeded or Failed"
Aug 21 07:10:28.291: INFO: Pod "pod-bc4ef94f-379f-4996-977f-40a80c26d17d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.023324ms
Aug 21 07:10:30.299: INFO: Pod "pod-bc4ef94f-379f-4996-977f-40a80c26d17d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041663526s
Aug 21 07:10:32.306: INFO: Pod "pod-bc4ef94f-379f-4996-977f-40a80c26d17d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048711824s
STEP: Saw pod success
Aug 21 07:10:32.306: INFO: Pod "pod-bc4ef94f-379f-4996-977f-40a80c26d17d" satisfied condition "Succeeded or Failed"
Aug 21 07:10:32.311: INFO: Trying to get logs from node kali-worker2 pod pod-bc4ef94f-379f-4996-977f-40a80c26d17d container test-container: 
STEP: delete the pod
Aug 21 07:10:32.371: INFO: Waiting for pod pod-bc4ef94f-379f-4996-977f-40a80c26d17d to disappear
Aug 21 07:10:32.380: INFO: Pod pod-bc4ef94f-379f-4996-977f-40a80c26d17d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:10:32.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1714" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3841,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:10:32.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:10:45.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3030" for this suite.

• [SLOW TEST:13.251 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":230,"skipped":3883,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:10:45.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-b40e3635-a746-481f-a0fa-0170b35db37f
STEP: Creating a pod to test consume secrets
Aug 21 07:10:45.766: INFO: Waiting up to 5m0s for pod "pod-secrets-abc45cc4-7774-4a7e-acb5-be1ccfa576e3" in namespace "secrets-2542" to be "Succeeded or Failed"
Aug 21 07:10:45.811: INFO: Pod "pod-secrets-abc45cc4-7774-4a7e-acb5-be1ccfa576e3": Phase="Pending", Reason="", readiness=false. Elapsed: 44.8076ms
Aug 21 07:10:47.818: INFO: Pod "pod-secrets-abc45cc4-7774-4a7e-acb5-be1ccfa576e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051608911s
Aug 21 07:10:49.826: INFO: Pod "pod-secrets-abc45cc4-7774-4a7e-acb5-be1ccfa576e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059353194s
STEP: Saw pod success
Aug 21 07:10:49.826: INFO: Pod "pod-secrets-abc45cc4-7774-4a7e-acb5-be1ccfa576e3" satisfied condition "Succeeded or Failed"
Aug 21 07:10:49.832: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-abc45cc4-7774-4a7e-acb5-be1ccfa576e3 container secret-volume-test: 
STEP: delete the pod
Aug 21 07:10:49.869: INFO: Waiting for pod pod-secrets-abc45cc4-7774-4a7e-acb5-be1ccfa576e3 to disappear
Aug 21 07:10:49.877: INFO: Pod pod-secrets-abc45cc4-7774-4a7e-acb5-be1ccfa576e3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:10:49.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2542" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3936,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:10:49.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 07:10:49.982: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf986111-c86e-4c33-8e81-600f68f3549b" in namespace "projected-5424" to be "Succeeded or Failed"
Aug 21 07:10:50.000: INFO: Pod "downwardapi-volume-cf986111-c86e-4c33-8e81-600f68f3549b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.127855ms
Aug 21 07:10:52.028: INFO: Pod "downwardapi-volume-cf986111-c86e-4c33-8e81-600f68f3549b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045573989s
Aug 21 07:10:54.036: INFO: Pod "downwardapi-volume-cf986111-c86e-4c33-8e81-600f68f3549b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053632219s
Aug 21 07:10:56.041: INFO: Pod "downwardapi-volume-cf986111-c86e-4c33-8e81-600f68f3549b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058519406s
STEP: Saw pod success
Aug 21 07:10:56.041: INFO: Pod "downwardapi-volume-cf986111-c86e-4c33-8e81-600f68f3549b" satisfied condition "Succeeded or Failed"
Aug 21 07:10:56.064: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-cf986111-c86e-4c33-8e81-600f68f3549b container client-container: 
STEP: delete the pod
Aug 21 07:10:56.149: INFO: Waiting for pod downwardapi-volume-cf986111-c86e-4c33-8e81-600f68f3549b to disappear
Aug 21 07:10:56.154: INFO: Pod downwardapi-volume-cf986111-c86e-4c33-8e81-600f68f3549b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:10:56.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5424" for this suite.

• [SLOW TEST:6.296 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3938,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:10:56.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-1176/secret-test-34420eb2-4382-4eb1-8d14-ded46b4de3eb
STEP: Creating a pod to test consume secrets
Aug 21 07:10:56.300: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f7642dc-16f0-4d27-ac59-6409d02fee43" in namespace "secrets-1176" to be "Succeeded or Failed"
Aug 21 07:10:56.321: INFO: Pod "pod-configmaps-9f7642dc-16f0-4d27-ac59-6409d02fee43": Phase="Pending", Reason="", readiness=false. Elapsed: 21.049828ms
Aug 21 07:10:58.388: INFO: Pod "pod-configmaps-9f7642dc-16f0-4d27-ac59-6409d02fee43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087913165s
Aug 21 07:11:00.394: INFO: Pod "pod-configmaps-9f7642dc-16f0-4d27-ac59-6409d02fee43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094464283s
STEP: Saw pod success
Aug 21 07:11:00.395: INFO: Pod "pod-configmaps-9f7642dc-16f0-4d27-ac59-6409d02fee43" satisfied condition "Succeeded or Failed"
Aug 21 07:11:00.399: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-9f7642dc-16f0-4d27-ac59-6409d02fee43 container env-test: 
STEP: delete the pod
Aug 21 07:11:00.458: INFO: Waiting for pod pod-configmaps-9f7642dc-16f0-4d27-ac59-6409d02fee43 to disappear
Aug 21 07:11:00.465: INFO: Pod pod-configmaps-9f7642dc-16f0-4d27-ac59-6409d02fee43 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:11:00.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1176" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":3955,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:11:00.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5980.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5980.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 07:11:06.819: INFO: DNS probes using dns-test-3fb375de-1b09-46f6-8b29-3f8bb36f94a0 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5980.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5980.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 07:11:13.025: INFO: File wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local from pod  dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 07:11:13.029: INFO: File jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local from pod  dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 07:11:13.029: INFO: Lookups using dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 failed for: [wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local]

Aug 21 07:11:18.038: INFO: File wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local from pod  dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 07:11:18.043: INFO: File jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local from pod  dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 07:11:18.044: INFO: Lookups using dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 failed for: [wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local]

Aug 21 07:11:23.037: INFO: File wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local from pod  dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 07:11:23.043: INFO: File jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local from pod  dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 07:11:23.043: INFO: Lookups using dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 failed for: [wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local]

Aug 21 07:11:28.038: INFO: File wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local from pod  dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 07:11:28.044: INFO: File jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local from pod  dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 07:11:28.044: INFO: Lookups using dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 failed for: [wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local]

Aug 21 07:11:33.036: INFO: File wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local from pod  dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 07:11:33.040: INFO: File jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local from pod  dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 07:11:33.040: INFO: Lookups using dns-5980/dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 failed for: [wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local]

Aug 21 07:11:38.922: INFO: DNS probes using dns-test-93ab0c95-a89f-40b9-9fcc-15f11e51c559 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5980.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5980.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5980.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5980.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 07:11:45.762: INFO: DNS probes using dns-test-90a5b7bc-4c96-4fa5-87c4-945d51b5067e succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:11:45.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5980" for this suite.

• [SLOW TEST:45.440 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":234,"skipped":3992,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:11:45.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:12:03.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1608" for this suite.

• [SLOW TEST:17.603 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":235,"skipped":3994,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:12:03.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 07:12:15.635: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 07:12:17.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733590735, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733590735, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733590735, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733590735, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 07:12:20.817: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 07:12:20.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4528-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:12:21.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2083" for this suite.
STEP: Destroying namespace "webhook-2083-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.584 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":236,"skipped":4036,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:12:22.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-091b1c4a-6b19-463e-bf7c-b24b72f36d77 in namespace container-probe-8253
Aug 21 07:12:26.289: INFO: Started pod liveness-091b1c4a-6b19-463e-bf7c-b24b72f36d77 in namespace container-probe-8253
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 07:12:26.293: INFO: Initial restart count of pod liveness-091b1c4a-6b19-463e-bf7c-b24b72f36d77 is 0
Aug 21 07:12:48.602: INFO: Restart count of pod container-probe-8253/liveness-091b1c4a-6b19-463e-bf7c-b24b72f36d77 is now 1 (22.308641689s elapsed)
Aug 21 07:13:08.700: INFO: Restart count of pod container-probe-8253/liveness-091b1c4a-6b19-463e-bf7c-b24b72f36d77 is now 2 (42.406840461s elapsed)
Aug 21 07:13:28.776: INFO: Restart count of pod container-probe-8253/liveness-091b1c4a-6b19-463e-bf7c-b24b72f36d77 is now 3 (1m2.482645301s elapsed)
Aug 21 07:13:46.865: INFO: Restart count of pod container-probe-8253/liveness-091b1c4a-6b19-463e-bf7c-b24b72f36d77 is now 4 (1m20.572383262s elapsed)
Aug 21 07:15:01.133: INFO: Restart count of pod container-probe-8253/liveness-091b1c4a-6b19-463e-bf7c-b24b72f36d77 is now 5 (2m34.840231256s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:15:01.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8253" for this suite.

• [SLOW TEST:159.120 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4041,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:15:01.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:15:17.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8519" for this suite.

• [SLOW TEST:16.720 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":238,"skipped":4047,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:15:17.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:15:22.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5316" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4105,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:15:22.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 07:15:22.224: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3398dc3-08cc-4428-bdda-d1c1c2abeb17" in namespace "downward-api-9436" to be "Succeeded or Failed"
Aug 21 07:15:22.284: INFO: Pod "downwardapi-volume-e3398dc3-08cc-4428-bdda-d1c1c2abeb17": Phase="Pending", Reason="", readiness=false. Elapsed: 58.77118ms
Aug 21 07:15:24.291: INFO: Pod "downwardapi-volume-e3398dc3-08cc-4428-bdda-d1c1c2abeb17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066423232s
Aug 21 07:15:26.299: INFO: Pod "downwardapi-volume-e3398dc3-08cc-4428-bdda-d1c1c2abeb17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073805095s
STEP: Saw pod success
Aug 21 07:15:26.299: INFO: Pod "downwardapi-volume-e3398dc3-08cc-4428-bdda-d1c1c2abeb17" satisfied condition "Succeeded or Failed"
Aug 21 07:15:26.304: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-e3398dc3-08cc-4428-bdda-d1c1c2abeb17 container client-container: 
STEP: delete the pod
Aug 21 07:15:26.357: INFO: Waiting for pod downwardapi-volume-e3398dc3-08cc-4428-bdda-d1c1c2abeb17 to disappear
Aug 21 07:15:26.371: INFO: Pod downwardapi-volume-e3398dc3-08cc-4428-bdda-d1c1c2abeb17 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:15:26.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9436" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4119,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:15:26.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 07:15:37.256: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 07:15:39.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733590937, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733590937, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733590937, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733590937, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 07:15:42.401: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:15:42.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4315" for this suite.
STEP: Destroying namespace "webhook-4315-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.203 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":241,"skipped":4138,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:15:42.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 21 07:15:42.672: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6322 /api/v1/namespaces/watch-6322/configmaps/e2e-watch-test-configmap-a 06aa89fb-0083-4950-900c-06be4a3528e1 2035371 0 2020-08-21 07:15:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 07:15:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 07:15:42.673: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6322 /api/v1/namespaces/watch-6322/configmaps/e2e-watch-test-configmap-a 06aa89fb-0083-4950-900c-06be4a3528e1 2035371 0 2020-08-21 07:15:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 07:15:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 21 07:15:52.688: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6322 /api/v1/namespaces/watch-6322/configmaps/e2e-watch-test-configmap-a 06aa89fb-0083-4950-900c-06be4a3528e1 2035421 0 2020-08-21 07:15:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 07:15:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 07:15:52.690: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6322 /api/v1/namespaces/watch-6322/configmaps/e2e-watch-test-configmap-a 06aa89fb-0083-4950-900c-06be4a3528e1 2035421 0 2020-08-21 07:15:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 07:15:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 21 07:16:02.706: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6322 /api/v1/namespaces/watch-6322/configmaps/e2e-watch-test-configmap-a 06aa89fb-0083-4950-900c-06be4a3528e1 2035454 0 2020-08-21 07:15:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 07:16:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 07:16:02.708: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6322 /api/v1/namespaces/watch-6322/configmaps/e2e-watch-test-configmap-a 06aa89fb-0083-4950-900c-06be4a3528e1 2035454 0 2020-08-21 07:15:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 07:16:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 21 07:16:12.721: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6322 /api/v1/namespaces/watch-6322/configmaps/e2e-watch-test-configmap-a 06aa89fb-0083-4950-900c-06be4a3528e1 2035486 0 2020-08-21 07:15:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 07:16:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 07:16:12.722: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6322 /api/v1/namespaces/watch-6322/configmaps/e2e-watch-test-configmap-a 06aa89fb-0083-4950-900c-06be4a3528e1 2035486 0 2020-08-21 07:15:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 07:16:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 21 07:16:22.735: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6322 /api/v1/namespaces/watch-6322/configmaps/e2e-watch-test-configmap-b 4b8a182b-10d1-456c-81f9-8559fa645d3b 2035516 0 2020-08-21 07:16:22 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-21 07:16:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 07:16:22.736: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6322 /api/v1/namespaces/watch-6322/configmaps/e2e-watch-test-configmap-b 4b8a182b-10d1-456c-81f9-8559fa645d3b 2035516 0 2020-08-21 07:16:22 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-21 07:16:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 21 07:16:32.746: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6322 /api/v1/namespaces/watch-6322/configmaps/e2e-watch-test-configmap-b 4b8a182b-10d1-456c-81f9-8559fa645d3b 2035546 0 2020-08-21 07:16:22 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-21 07:16:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 07:16:32.747: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6322 /api/v1/namespaces/watch-6322/configmaps/e2e-watch-test-configmap-b 4b8a182b-10d1-456c-81f9-8559fa645d3b 2035546 0 2020-08-21 07:16:22 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-21 07:16:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:16:42.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6322" for this suite.

• [SLOW TEST:60.159 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":242,"skipped":4156,"failed":0}
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:16:42.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3462.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3462.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 07:16:48.949: INFO: DNS probes using dns-3462/dns-test-a17840ca-d83a-4d48-8fcb-041458dff2e3 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:16:48.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3462" for this suite.

• [SLOW TEST:6.250 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":243,"skipped":4156,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:16:49.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-5z4r
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 07:16:49.533: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5z4r" in namespace "subpath-2759" to be "Succeeded or Failed"
Aug 21 07:16:49.542: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Pending", Reason="", readiness=false. Elapsed: 9.092216ms
Aug 21 07:16:51.567: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033752858s
Aug 21 07:16:53.574: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Running", Reason="", readiness=true. Elapsed: 4.040831529s
Aug 21 07:16:55.581: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Running", Reason="", readiness=true. Elapsed: 6.047787262s
Aug 21 07:16:57.588: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Running", Reason="", readiness=true. Elapsed: 8.055052419s
Aug 21 07:16:59.597: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Running", Reason="", readiness=true. Elapsed: 10.063297346s
Aug 21 07:17:01.605: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Running", Reason="", readiness=true. Elapsed: 12.071481677s
Aug 21 07:17:03.613: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Running", Reason="", readiness=true. Elapsed: 14.079487672s
Aug 21 07:17:05.621: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Running", Reason="", readiness=true. Elapsed: 16.087793806s
Aug 21 07:17:07.627: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Running", Reason="", readiness=true. Elapsed: 18.093465022s
Aug 21 07:17:09.634: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Running", Reason="", readiness=true. Elapsed: 20.101162277s
Aug 21 07:17:11.642: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Running", Reason="", readiness=true. Elapsed: 22.108957618s
Aug 21 07:17:13.651: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Running", Reason="", readiness=true. Elapsed: 24.117904943s
Aug 21 07:17:15.657: INFO: Pod "pod-subpath-test-configmap-5z4r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.12387367s
STEP: Saw pod success
Aug 21 07:17:15.657: INFO: Pod "pod-subpath-test-configmap-5z4r" satisfied condition "Succeeded or Failed"
Aug 21 07:17:15.661: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-5z4r container test-container-subpath-configmap-5z4r: 
STEP: delete the pod
Aug 21 07:17:15.721: INFO: Waiting for pod pod-subpath-test-configmap-5z4r to disappear
Aug 21 07:17:15.725: INFO: Pod pod-subpath-test-configmap-5z4r no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5z4r
Aug 21 07:17:15.725: INFO: Deleting pod "pod-subpath-test-configmap-5z4r" in namespace "subpath-2759"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:17:15.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2759" for this suite.

• [SLOW TEST:26.738 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":244,"skipped":4159,"failed":0}
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:17:15.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 21 07:17:15.813: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:17:23.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1356" for this suite.

• [SLOW TEST:7.820 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":245,"skipped":4165,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:17:23.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:17:57.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4021" for this suite.

• [SLOW TEST:34.127 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4181,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:17:57.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 21 07:18:02.362: INFO: Successfully updated pod "labelsupdate4575125b-36c5-4810-a6fe-c2dd89b76294"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:18:04.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6805" for this suite.

• [SLOW TEST:6.688 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4195,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:18:04.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Aug 21 07:18:04.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config api-versions'
Aug 21 07:18:05.649: INFO: stderr: ""
Aug 21 07:18:05.649: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:18:05.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3205" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":248,"skipped":4199,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:18:05.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 21 07:18:05.762: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 21 07:18:14.970: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:18:14.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7185" for this suite.

• [SLOW TEST:9.340 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4233,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:18:15.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 21 07:18:15.129: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1167 /api/v1/namespaces/watch-1167/configmaps/e2e-watch-test-label-changed bf7dc920-dc92-44c3-81af-bd0835331e24 2036080 0 2020-08-21 07:18:15 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-21 07:18:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 07:18:15.130: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1167 /api/v1/namespaces/watch-1167/configmaps/e2e-watch-test-label-changed bf7dc920-dc92-44c3-81af-bd0835331e24 2036081 0 2020-08-21 07:18:15 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-21 07:18:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 07:18:15.132: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1167 /api/v1/namespaces/watch-1167/configmaps/e2e-watch-test-label-changed bf7dc920-dc92-44c3-81af-bd0835331e24 2036082 0 2020-08-21 07:18:15 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-21 07:18:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 21 07:18:25.179: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1167 /api/v1/namespaces/watch-1167/configmaps/e2e-watch-test-label-changed bf7dc920-dc92-44c3-81af-bd0835331e24 2036123 0 2020-08-21 07:18:15 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-21 07:18:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 07:18:25.180: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1167 /api/v1/namespaces/watch-1167/configmaps/e2e-watch-test-label-changed bf7dc920-dc92-44c3-81af-bd0835331e24 2036124 0 2020-08-21 07:18:15 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-21 07:18:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 07:18:25.181: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1167 /api/v1/namespaces/watch-1167/configmaps/e2e-watch-test-label-changed bf7dc920-dc92-44c3-81af-bd0835331e24 2036125 0 2020-08-21 07:18:15 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-21 07:18:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:18:25.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1167" for this suite.

• [SLOW TEST:10.188 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":250,"skipped":4235,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:18:25.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 21 07:18:33.396: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 07:18:33.404: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 07:18:35.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 07:18:35.413: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 07:18:37.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 07:18:37.413: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 07:18:39.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 07:18:39.411: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:18:39.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7790" for this suite.

• [SLOW TEST:14.226 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4253,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:18:39.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 21 07:18:39.493: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:18:47.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9546" for this suite.

• [SLOW TEST:8.347 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":252,"skipped":4278,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:18:47.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5671
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-5671
I0821 07:18:47.945079      10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5671, replica count: 2
I0821 07:18:50.996628      10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 07:18:53.997721      10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 07:18:53.998: INFO: Creating new exec pod
Aug 21 07:18:59.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-5671 execpodk48n5 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 21 07:19:03.141: INFO: stderr: "I0821 07:19:03.006309    3794 log.go:172] (0x2f06380) (0x2f063f0) Create stream\nI0821 07:19:03.008625    3794 log.go:172] (0x2f06380) (0x2f063f0) Stream added, broadcasting: 1\nI0821 07:19:03.021248    3794 log.go:172] (0x2f06380) Reply frame received for 1\nI0821 07:19:03.021859    3794 log.go:172] (0x2f06380) (0x2aafc70) Create stream\nI0821 07:19:03.021966    3794 log.go:172] (0x2f06380) (0x2aafc70) Stream added, broadcasting: 3\nI0821 07:19:03.023550    3794 log.go:172] (0x2f06380) Reply frame received for 3\nI0821 07:19:03.023980    3794 log.go:172] (0x2f06380) (0x2e0c070) Create stream\nI0821 07:19:03.024100    3794 log.go:172] (0x2f06380) (0x2e0c070) Stream added, broadcasting: 5\nI0821 07:19:03.031473    3794 log.go:172] (0x2f06380) Reply frame received for 5\nI0821 07:19:03.118790    3794 log.go:172] (0x2f06380) Data frame received for 5\nI0821 07:19:03.119164    3794 log.go:172] (0x2f06380) Data frame received for 3\nI0821 07:19:03.119349    3794 log.go:172] (0x2aafc70) (3) Data frame handling\nI0821 07:19:03.119508    3794 log.go:172] (0x2e0c070) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nI0821 07:19:03.121316    3794 log.go:172] (0x2e0c070) (5) Data frame sent\nI0821 07:19:03.121463    3794 log.go:172] (0x2f06380) Data frame received for 5\nI0821 07:19:03.121555    3794 log.go:172] (0x2e0c070) (5) Data frame handling\nI0821 07:19:03.121665    3794 log.go:172] (0x2e0c070) (5) Data frame sent\nI0821 07:19:03.121739    3794 log.go:172] (0x2f06380) Data frame received for 5\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0821 07:19:03.121809    3794 log.go:172] (0x2e0c070) (5) Data frame handling\nI0821 07:19:03.122707    3794 log.go:172] (0x2f06380) Data frame received for 1\nI0821 07:19:03.122846    3794 log.go:172] (0x2f063f0) (1) Data frame handling\nI0821 07:19:03.123007    3794 log.go:172] (0x2f063f0) (1) Data frame sent\nI0821 07:19:03.124177    3794 log.go:172] (0x2f06380) (0x2f063f0) Stream removed, broadcasting: 1\nI0821 07:19:03.126356    3794 log.go:172] (0x2f06380) Go away received\nI0821 07:19:03.127858    3794 log.go:172] (0x2f06380) (0x2f063f0) Stream removed, broadcasting: 1\nI0821 07:19:03.128031    3794 log.go:172] (0x2f06380) (0x2aafc70) Stream removed, broadcasting: 3\nI0821 07:19:03.128184    3794 log.go:172] (0x2f06380) (0x2e0c070) Stream removed, broadcasting: 5\n"
Aug 21 07:19:03.142: INFO: stdout: ""
Aug 21 07:19:03.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-5671 execpodk48n5 -- /bin/sh -x -c nc -zv -t -w 2 10.103.81.21 80'
Aug 21 07:19:04.533: INFO: stderr: "I0821 07:19:04.413838    3831 log.go:172] (0x2f9a000) (0x2f9a070) Create stream\nI0821 07:19:04.419422    3831 log.go:172] (0x2f9a000) (0x2f9a070) Stream added, broadcasting: 1\nI0821 07:19:04.433809    3831 log.go:172] (0x2f9a000) Reply frame received for 1\nI0821 07:19:04.434251    3831 log.go:172] (0x2f9a000) (0x28b8770) Create stream\nI0821 07:19:04.434318    3831 log.go:172] (0x2f9a000) (0x28b8770) Stream added, broadcasting: 3\nI0821 07:19:04.435298    3831 log.go:172] (0x2f9a000) Reply frame received for 3\nI0821 07:19:04.435516    3831 log.go:172] (0x2f9a000) (0x2a4a0e0) Create stream\nI0821 07:19:04.435583    3831 log.go:172] (0x2f9a000) (0x2a4a0e0) Stream added, broadcasting: 5\nI0821 07:19:04.436790    3831 log.go:172] (0x2f9a000) Reply frame received for 5\nI0821 07:19:04.513644    3831 log.go:172] (0x2f9a000) Data frame received for 5\nI0821 07:19:04.513966    3831 log.go:172] (0x2a4a0e0) (5) Data frame handling\nI0821 07:19:04.514616    3831 log.go:172] (0x2a4a0e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.103.81.21 80\nI0821 07:19:04.515989    3831 log.go:172] (0x2f9a000) Data frame received for 5\nI0821 07:19:04.516134    3831 log.go:172] (0x2a4a0e0) (5) Data frame handling\nConnection to 10.103.81.21 80 port [tcp/http] succeeded!\nI0821 07:19:04.516232    3831 log.go:172] (0x2f9a000) Data frame received for 3\nI0821 07:19:04.516330    3831 log.go:172] (0x28b8770) (3) Data frame handling\nI0821 07:19:04.516437    3831 log.go:172] (0x2a4a0e0) (5) Data frame sent\nI0821 07:19:04.516576    3831 log.go:172] (0x2f9a000) Data frame received for 5\nI0821 07:19:04.516657    3831 log.go:172] (0x2a4a0e0) (5) Data frame handling\nI0821 07:19:04.517428    3831 log.go:172] (0x2f9a000) Data frame received for 1\nI0821 07:19:04.517523    3831 log.go:172] (0x2f9a070) (1) Data frame handling\nI0821 07:19:04.517654    3831 log.go:172] (0x2f9a070) (1) Data frame sent\nI0821 07:19:04.518648    3831 log.go:172] (0x2f9a000) (0x2f9a070) Stream removed, broadcasting: 1\nI0821 07:19:04.519245    3831 log.go:172] (0x2f9a000) Go away received\nI0821 07:19:04.521703    3831 log.go:172] (0x2f9a000) (0x2f9a070) Stream removed, broadcasting: 1\nI0821 07:19:04.521867    3831 log.go:172] (0x2f9a000) (0x28b8770) Stream removed, broadcasting: 3\nI0821 07:19:04.521988    3831 log.go:172] (0x2f9a000) (0x2a4a0e0) Stream removed, broadcasting: 5\n"
Aug 21 07:19:04.533: INFO: stdout: ""
Aug 21 07:19:04.534: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:19:04.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5671" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:16.803 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":253,"skipped":4290,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:19:04.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Aug 21 07:19:04.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9063'
Aug 21 07:19:06.256: INFO: stderr: ""
Aug 21 07:19:06.257: INFO: stdout: "pod/pause created\n"
Aug 21 07:19:06.257: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 21 07:19:06.257: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9063" to be "running and ready"
Aug 21 07:19:06.273: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.682521ms
Aug 21 07:19:08.279: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022013283s
Aug 21 07:19:10.306: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.048851174s
Aug 21 07:19:10.306: INFO: Pod "pause" satisfied condition "running and ready"
Aug 21 07:19:10.306: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 21 07:19:10.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9063'
Aug 21 07:19:11.430: INFO: stderr: ""
Aug 21 07:19:11.430: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 21 07:19:11.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9063'
Aug 21 07:19:12.521: INFO: stderr: ""
Aug 21 07:19:12.521: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          6s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 21 07:19:12.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9063'
Aug 21 07:19:13.664: INFO: stderr: ""
Aug 21 07:19:13.664: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 21 07:19:13.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9063'
Aug 21 07:19:14.789: INFO: stderr: ""
Aug 21 07:19:14.789: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Aug 21 07:19:14.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9063'
Aug 21 07:19:15.980: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 07:19:15.981: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 21 07:19:15.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9063'
Aug 21 07:19:17.103: INFO: stderr: "No resources found in kubectl-9063 namespace.\n"
Aug 21 07:19:17.103: INFO: stdout: ""
Aug 21 07:19:17.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9063 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 07:19:18.225: INFO: stderr: ""
Aug 21 07:19:18.225: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:19:18.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9063" for this suite.

• [SLOW TEST:13.656 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":254,"skipped":4307,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:19:18.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:19:29.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1328" for this suite.

• [SLOW TEST:11.248 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":255,"skipped":4328,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:19:29.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:19:29.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9037" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":256,"skipped":4339,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:19:29.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-2d0aac20-7507-41a8-870d-c28c7d550ac8
STEP: Creating a pod to test consume configMaps
Aug 21 07:19:29.795: INFO: Waiting up to 5m0s for pod "pod-configmaps-f40d2cba-6f1f-4def-a413-80db3041ffdb" in namespace "configmap-3603" to be "Succeeded or Failed"
Aug 21 07:19:29.805: INFO: Pod "pod-configmaps-f40d2cba-6f1f-4def-a413-80db3041ffdb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.994549ms
Aug 21 07:19:32.084: INFO: Pod "pod-configmaps-f40d2cba-6f1f-4def-a413-80db3041ffdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288208303s
Aug 21 07:19:34.091: INFO: Pod "pod-configmaps-f40d2cba-6f1f-4def-a413-80db3041ffdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.295472849s
STEP: Saw pod success
Aug 21 07:19:34.091: INFO: Pod "pod-configmaps-f40d2cba-6f1f-4def-a413-80db3041ffdb" satisfied condition "Succeeded or Failed"
Aug 21 07:19:34.095: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-f40d2cba-6f1f-4def-a413-80db3041ffdb container configmap-volume-test: 
STEP: delete the pod
Aug 21 07:19:34.134: INFO: Waiting for pod pod-configmaps-f40d2cba-6f1f-4def-a413-80db3041ffdb to disappear
Aug 21 07:19:34.227: INFO: Pod pod-configmaps-f40d2cba-6f1f-4def-a413-80db3041ffdb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:19:34.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3603" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4358,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:19:34.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 07:19:45.671: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 07:19:47.688: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591185, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591185, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591185, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591185, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 07:19:49.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591185, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591185, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591185, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591185, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 07:19:52.729: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:19:52.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3227" for this suite.
STEP: Destroying namespace "webhook-3227-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.649 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":258,"skipped":4360,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:19:52.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 07:19:58.761: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 07:20:00.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591198, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591198, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591198, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591198, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 07:20:03.929: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 21 07:20:03.958: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:20:04.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3487" for this suite.
STEP: Destroying namespace "webhook-3487-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.208 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":259,"skipped":4375,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:20:04.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-4a93761b-e68f-417b-a4b4-b50cb838261b
STEP: Creating a pod to test consume configMaps
Aug 21 07:20:04.210: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-672cb664-523a-4887-bf50-81b49ff41c73" in namespace "projected-7910" to be "Succeeded or Failed"
Aug 21 07:20:04.234: INFO: Pod "pod-projected-configmaps-672cb664-523a-4887-bf50-81b49ff41c73": Phase="Pending", Reason="", readiness=false. Elapsed: 24.44852ms
Aug 21 07:20:06.242: INFO: Pod "pod-projected-configmaps-672cb664-523a-4887-bf50-81b49ff41c73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032276549s
Aug 21 07:20:08.249: INFO: Pod "pod-projected-configmaps-672cb664-523a-4887-bf50-81b49ff41c73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038954261s
STEP: Saw pod success
Aug 21 07:20:08.249: INFO: Pod "pod-projected-configmaps-672cb664-523a-4887-bf50-81b49ff41c73" satisfied condition "Succeeded or Failed"
Aug 21 07:20:08.254: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-672cb664-523a-4887-bf50-81b49ff41c73 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 07:20:08.300: INFO: Waiting for pod pod-projected-configmaps-672cb664-523a-4887-bf50-81b49ff41c73 to disappear
Aug 21 07:20:08.312: INFO: Pod pod-projected-configmaps-672cb664-523a-4887-bf50-81b49ff41c73 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:20:08.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7910" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4400,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:20:08.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0821 07:20:20.666145      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 07:20:20.666: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:20:20.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2858" for this suite.

• [SLOW TEST:12.356 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":261,"skipped":4418,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:20:20.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-19533e96-a2a8-4200-9a22-9e8e10bcbe3a in namespace container-probe-909
Aug 21 07:20:24.829: INFO: Started pod test-webserver-19533e96-a2a8-4200-9a22-9e8e10bcbe3a in namespace container-probe-909
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 07:20:24.833: INFO: Initial restart count of pod test-webserver-19533e96-a2a8-4200-9a22-9e8e10bcbe3a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:24:25.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-909" for this suite.

• [SLOW TEST:245.204 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4430,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:24:25.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 07:24:26.330: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41d6451a-f98e-4518-86f7-0d0d6711c36b" in namespace "projected-6919" to be "Succeeded or Failed"
Aug 21 07:24:26.334: INFO: Pod "downwardapi-volume-41d6451a-f98e-4518-86f7-0d0d6711c36b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.381688ms
Aug 21 07:24:28.510: INFO: Pod "downwardapi-volume-41d6451a-f98e-4518-86f7-0d0d6711c36b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180320223s
Aug 21 07:24:30.537: INFO: Pod "downwardapi-volume-41d6451a-f98e-4518-86f7-0d0d6711c36b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.207510959s
STEP: Saw pod success
Aug 21 07:24:30.538: INFO: Pod "downwardapi-volume-41d6451a-f98e-4518-86f7-0d0d6711c36b" satisfied condition "Succeeded or Failed"
Aug 21 07:24:30.581: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-41d6451a-f98e-4518-86f7-0d0d6711c36b container client-container: 
STEP: delete the pod
Aug 21 07:24:30.718: INFO: Waiting for pod downwardapi-volume-41d6451a-f98e-4518-86f7-0d0d6711c36b to disappear
Aug 21 07:24:30.730: INFO: Pod downwardapi-volume-41d6451a-f98e-4518-86f7-0d0d6711c36b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:24:30.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6919" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4451,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:24:30.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug 21 07:24:30.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1187'
Aug 21 07:24:32.328: INFO: stderr: ""
Aug 21 07:24:32.328: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 21 07:24:33.336: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 07:24:33.337: INFO: Found 0 / 1
Aug 21 07:24:34.337: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 07:24:34.337: INFO: Found 0 / 1
Aug 21 07:24:35.335: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 07:24:35.336: INFO: Found 0 / 1
Aug 21 07:24:36.336: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 07:24:36.336: INFO: Found 1 / 1
Aug 21 07:24:36.336: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 21 07:24:36.343: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 07:24:36.343: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 21 07:24:36.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config patch pod agnhost-master-gdxnk --namespace=kubectl-1187 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 21 07:24:37.501: INFO: stderr: ""
Aug 21 07:24:37.501: INFO: stdout: "pod/agnhost-master-gdxnk patched\n"
STEP: checking annotations
Aug 21 07:24:37.507: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 07:24:37.507: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:24:37.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1187" for this suite.

• [SLOW TEST:6.771 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":264,"skipped":4473,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:24:37.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-fd37074e-1b83-47a9-852f-373461443d96
STEP: Creating a pod to test consume secrets
Aug 21 07:24:37.673: INFO: Waiting up to 5m0s for pod "pod-secrets-a8445f1d-b8b2-40f9-ab6a-c13f76196c83" in namespace "secrets-5936" to be "Succeeded or Failed"
Aug 21 07:24:37.695: INFO: Pod "pod-secrets-a8445f1d-b8b2-40f9-ab6a-c13f76196c83": Phase="Pending", Reason="", readiness=false. Elapsed: 21.453396ms
Aug 21 07:24:39.701: INFO: Pod "pod-secrets-a8445f1d-b8b2-40f9-ab6a-c13f76196c83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027754991s
Aug 21 07:24:41.708: INFO: Pod "pod-secrets-a8445f1d-b8b2-40f9-ab6a-c13f76196c83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034359979s
STEP: Saw pod success
Aug 21 07:24:41.708: INFO: Pod "pod-secrets-a8445f1d-b8b2-40f9-ab6a-c13f76196c83" satisfied condition "Succeeded or Failed"
Aug 21 07:24:41.714: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-a8445f1d-b8b2-40f9-ab6a-c13f76196c83 container secret-volume-test: 
STEP: delete the pod
Aug 21 07:24:41.771: INFO: Waiting for pod pod-secrets-a8445f1d-b8b2-40f9-ab6a-c13f76196c83 to disappear
Aug 21 07:24:41.784: INFO: Pod pod-secrets-a8445f1d-b8b2-40f9-ab6a-c13f76196c83 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:24:41.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5936" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4499,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:24:41.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-4073f662-3211-409a-bc7e-f3a65995200f
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:24:41.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4888" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":266,"skipped":4530,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:24:41.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 21 07:24:41.960: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 07:24:41.983: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 07:24:41.988: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
Aug 21 07:24:42.034: INFO: kindnet-kkxd5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 07:24:42.034: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 07:24:42.034: INFO: agnhost-master-gdxnk from kubectl-1187 started at 2020-08-21 07:24:32 +0000 UTC (1 container statuses recorded)
Aug 21 07:24:42.034: INFO: 	Container agnhost-master ready: true, restart count 0
Aug 21 07:24:42.034: INFO: kube-proxy-vn4t5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 07:24:42.034: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 07:24:42.034: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
Aug 21 07:24:42.046: INFO: kindnet-qzfqb from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 07:24:42.046: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 07:24:42.046: INFO: kube-proxy-c52ll from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 07:24:42.046: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ceb703f1-ce66-4ba5-a1c5-5560f8a47183 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-ceb703f1-ce66-4ba5-a1c5-5560f8a47183 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ceb703f1-ce66-4ba5-a1c5-5560f8a47183
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:25:00.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5684" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:18.435 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":267,"skipped":4535,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:25:00.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 07:25:00.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd6ebe3d-4343-408e-b06e-4558ce5d07eb" in namespace "downward-api-7757" to be "Succeeded or Failed"
Aug 21 07:25:00.456: INFO: Pod "downwardapi-volume-dd6ebe3d-4343-408e-b06e-4558ce5d07eb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.968862ms
Aug 21 07:25:02.550: INFO: Pod "downwardapi-volume-dd6ebe3d-4343-408e-b06e-4558ce5d07eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109183276s
Aug 21 07:25:04.557: INFO: Pod "downwardapi-volume-dd6ebe3d-4343-408e-b06e-4558ce5d07eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116666069s
STEP: Saw pod success
Aug 21 07:25:04.558: INFO: Pod "downwardapi-volume-dd6ebe3d-4343-408e-b06e-4558ce5d07eb" satisfied condition "Succeeded or Failed"
Aug 21 07:25:04.562: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-dd6ebe3d-4343-408e-b06e-4558ce5d07eb container client-container: 
STEP: delete the pod
Aug 21 07:25:04.642: INFO: Waiting for pod downwardapi-volume-dd6ebe3d-4343-408e-b06e-4558ce5d07eb to disappear
Aug 21 07:25:04.647: INFO: Pod downwardapi-volume-dd6ebe3d-4343-408e-b06e-4558ce5d07eb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:25:04.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7757" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4562,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
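(Editorial aside, not part of the captured log.) The downward API test above projects the container's memory limit into a file inside the pod. A minimal sketch, assuming the standard k8s.io/api/core/v1 and resource packages, of a downward API volume item that does this; the volume name and file path are invented for the example, while the container name and resource field mirror what the test log reports.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Downward API volume that exposes the container's memory limit as the
	// file "memory_limit" inside the mounted volume.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
						Divisor:       resource.MustParse("1"),
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}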
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:25:04.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 07:25:04.734: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 21 07:25:04.763: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 21 07:25:09.769: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 21 07:25:09.770: INFO: Creating deployment "test-rolling-update-deployment"
Aug 21 07:25:09.776: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 21 07:25:09.817: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 21 07:25:11.935: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 21 07:25:11.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591509, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591509, loc:(*time.Location)(0x62a11f0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591509, loc:(*time.Location)(0x62a11f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733591509, loc:(*time.Location)(0x62a11f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 07:25:14.053: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 21 07:25:14.109: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-6507 /apis/apps/v1/namespaces/deployment-6507/deployments/test-rolling-update-deployment ce752f42-29de-4d40-bf54-a46be72fa096 2038160 1 2020-08-21 07:25:09 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-08-21 07:25:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-21 07:25:13 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9c5da98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-21 07:25:09 +0000 UTC,LastTransitionTime:2020-08-21 07:25:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-08-21 07:25:13 +0000 UTC,LastTransitionTime:2020-08-21 07:25:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 21 07:25:14.118: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-6507 /apis/apps/v1/namespaces/deployment-6507/replicasets/test-rolling-update-deployment-59d5cb45c7 6b75afd1-b8df-432f-b65c-9720a06b6890 2038148 1 2020-08-21 07:25:09 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment ce752f42-29de-4d40-bf54-a46be72fa096 0x9c5dfd7 0x9c5dfd8}] []  [{kube-controller-manager Update apps/v1 2020-08-21 07:25:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 101 55 53 50 102 52 50 45 50 57 100 101 45 52 100 52 48 45 98 102 53 52 45 97 52 54 98 101 55 50 102 97 48 57 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 
115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xab02078  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 21 07:25:14.118: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 21 07:25:14.120: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-6507 /apis/apps/v1/namespaces/deployment-6507/replicasets/test-rolling-update-controller e042808c-04b2-4844-99f0-3ed7f39309b5 2038159 2 2020-08-21 07:25:04 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment ce752f42-29de-4d40-bf54-a46be72fa096 0x9c5dea7 0x9c5dea8}] []  [{e2e.test Update apps/v1 2020-08-21 07:25:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-21 07:25:13 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 101 55 53 50 102 52 50 45 50 57 100 101 45 52 100 52 48 45 98 102 53 52 45 97 52 54 98 101 55 50 102 97 48 57 54 92 34 
125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x9c5df48  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 07:25:14.129: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-4gvjr" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-4gvjr test-rolling-update-deployment-59d5cb45c7- deployment-6507 /api/v1/namespaces/deployment-6507/pods/test-rolling-update-deployment-59d5cb45c7-4gvjr b8d68ae4-36bb-4f2f-8493-8c57b791af18 2038147 0 2020-08-21 07:25:09 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 6b75afd1-b8df-432f-b65c-9720a06b6890 0xab02567 0xab02568}] []  [{kube-controller-manager Update v1 2020-08-21 07:25:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 98 55 53 97 102 100 49 45 98 56 100 102 45 52 51 50 102 45 98 54 53 99 45 57 55 50 48 97 48 54 98 54 56 57 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 07:25:13 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 
34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 54 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jw9vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jw9vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jw9vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,
Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 07:25:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 07:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 07:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 07:25:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.61,StartTime:2020-08-21 07:25:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 07:25:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://295d23f2e41ec4e0b2ebf800e2c8f11910d54aec3ad71e25de7c4b1ae731c4b0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:25:14.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6507" for this suite.

• [SLOW TEST:9.477 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":269,"skipped":4609,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
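(Editorial aside, not part of the captured log.) The deployment dump above is hard to read, but it shows the default rolling-update strategy: MaxUnavailable and MaxSurge both 25%, revisionHistoryLimit 10, progressDeadlineSeconds 600. A hedged sketch, assuming the standard k8s.io/api/apps/v1, k8s.io/api/core/v1, and intstr packages, of roughly how that strategy is expressed in Go; the deployment name, labels, and image are taken from the log, everything else is a placeholder.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			// Rolling update: at most 25% of replicas unavailable and at most
			// 25% surge above the desired count at any point in the rollout.
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
					}},
				},
			},
		},
	}
	fmt.Printf("strategy: %+v\n", dep.Spec.Strategy)
}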
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:25:14.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 07:25:14.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3848
I0821 07:25:14.418905      10 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3848, replica count: 1
I0821 07:25:15.470297      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 07:25:16.471173      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 07:25:17.471839      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 07:25:17.609: INFO: Created: latency-svc-84bpj
Aug 21 07:25:17.628: INFO: Got endpoints: latency-svc-84bpj [52.898656ms]
Aug 21 07:25:17.669: INFO: Created: latency-svc-tt64j
Aug 21 07:25:17.711: INFO: Got endpoints: latency-svc-tt64j [82.160604ms]
Aug 21 07:25:17.763: INFO: Created: latency-svc-tzjr4
Aug 21 07:25:17.795: INFO: Got endpoints: latency-svc-tzjr4 [166.26896ms]
Aug 21 07:25:17.885: INFO: Created: latency-svc-nzbln
Aug 21 07:25:17.921: INFO: Got endpoints: latency-svc-nzbln [291.615243ms]
Aug 21 07:25:17.950: INFO: Created: latency-svc-6rbg4
Aug 21 07:25:17.963: INFO: Got endpoints: latency-svc-6rbg4 [334.653676ms]
Aug 21 07:25:18.035: INFO: Created: latency-svc-4c54t
Aug 21 07:25:18.046: INFO: Got endpoints: latency-svc-4c54t [416.983073ms]
Aug 21 07:25:18.046: INFO: Created: latency-svc-mqdt2
Aug 21 07:25:18.070: INFO: Got endpoints: latency-svc-mqdt2 [441.182787ms]
Aug 21 07:25:18.118: INFO: Created: latency-svc-lpkfg
Aug 21 07:25:18.133: INFO: Got endpoints: latency-svc-lpkfg [503.925244ms]
Aug 21 07:25:18.190: INFO: Created: latency-svc-ct6m8
Aug 21 07:25:18.215: INFO: Got endpoints: latency-svc-ct6m8 [585.966333ms]
Aug 21 07:25:18.243: INFO: Created: latency-svc-l7cjg
Aug 21 07:25:18.257: INFO: Got endpoints: latency-svc-l7cjg [628.125359ms]
Aug 21 07:25:18.341: INFO: Created: latency-svc-mfhts
Aug 21 07:25:18.344: INFO: Got endpoints: latency-svc-mfhts [714.751266ms]
Aug 21 07:25:18.412: INFO: Created: latency-svc-gv7sz
Aug 21 07:25:18.412: INFO: Created: latency-svc-6lmh8
Aug 21 07:25:18.426: INFO: Got endpoints: latency-svc-6lmh8 [796.475204ms]
Aug 21 07:25:18.426: INFO: Got endpoints: latency-svc-gv7sz [797.110606ms]
Aug 21 07:25:18.490: INFO: Created: latency-svc-lkdpw
Aug 21 07:25:18.493: INFO: Got endpoints: latency-svc-lkdpw [864.696393ms]
Aug 21 07:25:18.537: INFO: Created: latency-svc-2494f
Aug 21 07:25:18.558: INFO: Got endpoints: latency-svc-2494f [928.959504ms]
Aug 21 07:25:18.641: INFO: Created: latency-svc-rsgbv
Aug 21 07:25:18.677: INFO: Got endpoints: latency-svc-rsgbv [1.048052461s]
Aug 21 07:25:18.712: INFO: Created: latency-svc-tmxh8
Aug 21 07:25:18.726: INFO: Got endpoints: latency-svc-tmxh8 [1.015319685s]
Aug 21 07:25:18.772: INFO: Created: latency-svc-rctg7
Aug 21 07:25:18.775: INFO: Got endpoints: latency-svc-rctg7 [980.253485ms]
Aug 21 07:25:18.826: INFO: Created: latency-svc-7jjx7
Aug 21 07:25:18.841: INFO: Got endpoints: latency-svc-7jjx7 [919.98379ms]
Aug 21 07:25:18.903: INFO: Created: latency-svc-cwd8k
Aug 21 07:25:18.938: INFO: Got endpoints: latency-svc-cwd8k [974.779462ms]
Aug 21 07:25:18.963: INFO: Created: latency-svc-xdlwn
Aug 21 07:25:19.030: INFO: Got endpoints: latency-svc-xdlwn [984.353253ms]
Aug 21 07:25:19.054: INFO: Created: latency-svc-xwgzj
Aug 21 07:25:19.070: INFO: Got endpoints: latency-svc-xwgzj [998.846303ms]
Aug 21 07:25:19.095: INFO: Created: latency-svc-cvdst
Aug 21 07:25:19.125: INFO: Got endpoints: latency-svc-cvdst [991.596532ms]
Aug 21 07:25:19.179: INFO: Created: latency-svc-tcxg9
Aug 21 07:25:19.222: INFO: Got endpoints: latency-svc-tcxg9 [1.006810405s]
Aug 21 07:25:19.306: INFO: Created: latency-svc-7hqqv
Aug 21 07:25:19.311: INFO: Got endpoints: latency-svc-7hqqv [1.054106883s]
Aug 21 07:25:19.348: INFO: Created: latency-svc-sdbft
Aug 21 07:25:19.359: INFO: Got endpoints: latency-svc-sdbft [1.014820698s]
Aug 21 07:25:19.377: INFO: Created: latency-svc-vhv6v
Aug 21 07:25:19.389: INFO: Got endpoints: latency-svc-vhv6v [962.406749ms]
Aug 21 07:25:19.437: INFO: Created: latency-svc-4h4j5
Aug 21 07:25:19.450: INFO: Got endpoints: latency-svc-4h4j5 [1.023689513s]
Aug 21 07:25:19.480: INFO: Created: latency-svc-7bdrh
Aug 21 07:25:19.492: INFO: Got endpoints: latency-svc-7bdrh [998.400351ms]
Aug 21 07:25:19.509: INFO: Created: latency-svc-nv9mt
Aug 21 07:25:19.574: INFO: Got endpoints: latency-svc-nv9mt [1.015881329s]
Aug 21 07:25:19.587: INFO: Created: latency-svc-jg74t
Aug 21 07:25:19.600: INFO: Got endpoints: latency-svc-jg74t [923.085231ms]
Aug 21 07:25:19.642: INFO: Created: latency-svc-kp94k
Aug 21 07:25:19.655: INFO: Got endpoints: latency-svc-kp94k [928.205119ms]
Aug 21 07:25:19.720: INFO: Created: latency-svc-bpxb9
Aug 21 07:25:19.744: INFO: Got endpoints: latency-svc-bpxb9 [968.684341ms]
Aug 21 07:25:19.773: INFO: Created: latency-svc-7fzp4
Aug 21 07:25:19.787: INFO: Got endpoints: latency-svc-7fzp4 [945.702585ms]
Aug 21 07:25:19.856: INFO: Created: latency-svc-qm89s
Aug 21 07:25:19.911: INFO: Got endpoints: latency-svc-qm89s [972.714543ms]
Aug 21 07:25:19.947: INFO: Created: latency-svc-bdcmt
Aug 21 07:25:19.988: INFO: Got endpoints: latency-svc-bdcmt [957.202834ms]
Aug 21 07:25:20.080: INFO: Created: latency-svc-k7j5f
Aug 21 07:25:20.120: INFO: Got endpoints: latency-svc-k7j5f [1.049819004s]
Aug 21 07:25:20.171: INFO: Created: latency-svc-t2j2z
Aug 21 07:25:20.187: INFO: Got endpoints: latency-svc-t2j2z [1.061655108s]
Aug 21 07:25:20.260: INFO: Created: latency-svc-j9d5m
Aug 21 07:25:20.275: INFO: Got endpoints: latency-svc-j9d5m [1.052820394s]
Aug 21 07:25:20.612: INFO: Created: latency-svc-fj9mq
Aug 21 07:25:20.683: INFO: Got endpoints: latency-svc-fj9mq [1.371090639s]
Aug 21 07:25:20.769: INFO: Created: latency-svc-n546j
Aug 21 07:25:20.789: INFO: Got endpoints: latency-svc-n546j [1.429976393s]
Aug 21 07:25:20.835: INFO: Created: latency-svc-kpz7t
Aug 21 07:25:20.852: INFO: Got endpoints: latency-svc-kpz7t [1.462805268s]
Aug 21 07:25:21.112: INFO: Created: latency-svc-r7lp6
Aug 21 07:25:21.252: INFO: Got endpoints: latency-svc-r7lp6 [1.802259631s]
Aug 21 07:25:21.310: INFO: Created: latency-svc-4frlq
Aug 21 07:25:21.326: INFO: Got endpoints: latency-svc-4frlq [1.834126524s]
Aug 21 07:25:21.395: INFO: Created: latency-svc-b4jrr
Aug 21 07:25:21.410: INFO: Got endpoints: latency-svc-b4jrr [1.836526672s]
Aug 21 07:25:21.436: INFO: Created: latency-svc-kjfw5
Aug 21 07:25:21.446: INFO: Got endpoints: latency-svc-kjfw5 [1.845544539s]
Aug 21 07:25:21.489: INFO: Created: latency-svc-tmtdn
Aug 21 07:25:21.551: INFO: Got endpoints: latency-svc-tmtdn [1.895983365s]
Aug 21 07:25:21.585: INFO: Created: latency-svc-zl82z
Aug 21 07:25:21.597: INFO: Got endpoints: latency-svc-zl82z [1.852241138s]
Aug 21 07:25:21.621: INFO: Created: latency-svc-s62rl
Aug 21 07:25:21.633: INFO: Got endpoints: latency-svc-s62rl [1.845955467s]
Aug 21 07:25:21.712: INFO: Created: latency-svc-xbsqp
Aug 21 07:25:21.719: INFO: Got endpoints: latency-svc-xbsqp [1.807500865s]
Aug 21 07:25:21.747: INFO: Created: latency-svc-bqfmw
Aug 21 07:25:21.773: INFO: Got endpoints: latency-svc-bqfmw [1.784873269s]
Aug 21 07:25:21.892: INFO: Created: latency-svc-65qdk
Aug 21 07:25:21.899: INFO: Got endpoints: latency-svc-65qdk [1.778982582s]
Aug 21 07:25:21.927: INFO: Created: latency-svc-c5hxr
Aug 21 07:25:21.946: INFO: Got endpoints: latency-svc-c5hxr [1.758721735s]
Aug 21 07:25:21.987: INFO: Created: latency-svc-w4qpk
Aug 21 07:25:22.042: INFO: Got endpoints: latency-svc-w4qpk [1.766300643s]
Aug 21 07:25:22.072: INFO: Created: latency-svc-f25m2
Aug 21 07:25:22.084: INFO: Got endpoints: latency-svc-f25m2 [1.400893524s]
Aug 21 07:25:22.126: INFO: Created: latency-svc-nt7j5
Aug 21 07:25:22.185: INFO: Got endpoints: latency-svc-nt7j5 [1.395570552s]
Aug 21 07:25:22.221: INFO: Created: latency-svc-f8vnz
Aug 21 07:25:22.234: INFO: Got endpoints: latency-svc-f8vnz [1.382293614s]
Aug 21 07:25:22.341: INFO: Created: latency-svc-r7747
Aug 21 07:25:22.356: INFO: Got endpoints: latency-svc-r7747 [1.103568868s]
Aug 21 07:25:22.397: INFO: Created: latency-svc-97qbh
Aug 21 07:25:22.422: INFO: Got endpoints: latency-svc-97qbh [1.095153893s]
Aug 21 07:25:22.473: INFO: Created: latency-svc-pqpz6
Aug 21 07:25:22.497: INFO: Got endpoints: latency-svc-pqpz6 [1.086630222s]
Aug 21 07:25:22.535: INFO: Created: latency-svc-ms5r8
Aug 21 07:25:22.549: INFO: Got endpoints: latency-svc-ms5r8 [1.102236552s]
Aug 21 07:25:22.672: INFO: Created: latency-svc-pkp6r
Aug 21 07:25:22.680: INFO: Got endpoints: latency-svc-pkp6r [1.128613293s]
Aug 21 07:25:22.714: INFO: Created: latency-svc-hq2xp
Aug 21 07:25:22.740: INFO: Got endpoints: latency-svc-hq2xp [1.143133212s]
Aug 21 07:25:22.767: INFO: Created: latency-svc-jntdq
Aug 21 07:25:22.821: INFO: Got endpoints: latency-svc-jntdq [1.187801283s]
Aug 21 07:25:22.844: INFO: Created: latency-svc-bnt5x
Aug 21 07:25:22.873: INFO: Got endpoints: latency-svc-bnt5x [1.154236351s]
Aug 21 07:25:22.975: INFO: Created: latency-svc-t7z8b
Aug 21 07:25:22.982: INFO: Got endpoints: latency-svc-t7z8b [1.208522944s]
Aug 21 07:25:23.006: INFO: Created: latency-svc-bgwx8
Aug 21 07:25:23.038: INFO: Got endpoints: latency-svc-bgwx8 [1.138486044s]
Aug 21 07:25:23.073: INFO: Created: latency-svc-pwsmw
Aug 21 07:25:23.126: INFO: Got endpoints: latency-svc-pwsmw [1.180081789s]
Aug 21 07:25:23.181: INFO: Created: latency-svc-dzvmp
Aug 21 07:25:23.203: INFO: Got endpoints: latency-svc-dzvmp [1.160840087s]
Aug 21 07:25:23.224: INFO: Created: latency-svc-7g8g2
Aug 21 07:25:23.264: INFO: Got endpoints: latency-svc-7g8g2 [1.180136305s]
Aug 21 07:25:23.295: INFO: Created: latency-svc-m5qdn
Aug 21 07:25:23.331: INFO: Got endpoints: latency-svc-m5qdn [1.146230258s]
Aug 21 07:25:23.401: INFO: Created: latency-svc-9gm2v
Aug 21 07:25:23.404: INFO: Got endpoints: latency-svc-9gm2v [1.169163923s]
Aug 21 07:25:23.439: INFO: Created: latency-svc-gvcdq
Aug 21 07:25:23.450: INFO: Got endpoints: latency-svc-gvcdq [1.093180677s]
Aug 21 07:25:23.469: INFO: Created: latency-svc-vt67r
Aug 21 07:25:23.500: INFO: Got endpoints: latency-svc-vt67r [1.078342201s]
Aug 21 07:25:23.562: INFO: Created: latency-svc-d8g6v
Aug 21 07:25:23.568: INFO: Got endpoints: latency-svc-d8g6v [1.06996167s]
Aug 21 07:25:23.589: INFO: Created: latency-svc-ss6gz
Aug 21 07:25:23.601: INFO: Got endpoints: latency-svc-ss6gz [1.051899484s]
Aug 21 07:25:23.620: INFO: Created: latency-svc-z65ln
Aug 21 07:25:23.644: INFO: Got endpoints: latency-svc-z65ln [963.50767ms]
Aug 21 07:25:23.694: INFO: Created: latency-svc-q2qgv
Aug 21 07:25:23.697: INFO: Got endpoints: latency-svc-q2qgv [956.55821ms]
Aug 21 07:25:23.722: INFO: Created: latency-svc-pgz7f
Aug 21 07:25:23.733: INFO: Got endpoints: latency-svc-pgz7f [911.68267ms]
Aug 21 07:25:23.757: INFO: Created: latency-svc-shftl
Aug 21 07:25:23.769: INFO: Got endpoints: latency-svc-shftl [895.798492ms]
Aug 21 07:25:23.786: INFO: Created: latency-svc-bq7nr
Aug 21 07:25:23.844: INFO: Got endpoints: latency-svc-bq7nr [861.877474ms]
Aug 21 07:25:23.871: INFO: Created: latency-svc-rtxsp
Aug 21 07:25:23.895: INFO: Got endpoints: latency-svc-rtxsp [857.24187ms]
Aug 21 07:25:23.926: INFO: Created: latency-svc-k95kg
Aug 21 07:25:23.938: INFO: Got endpoints: latency-svc-k95kg [811.930871ms]
Aug 21 07:25:24.009: INFO: Created: latency-svc-lxvq5
Aug 21 07:25:24.029: INFO: Got endpoints: latency-svc-lxvq5 [825.989119ms]
Aug 21 07:25:24.065: INFO: Created: latency-svc-jvq5l
Aug 21 07:25:24.143: INFO: Got endpoints: latency-svc-jvq5l [878.343264ms]
Aug 21 07:25:24.147: INFO: Created: latency-svc-tnxhq
Aug 21 07:25:24.155: INFO: Got endpoints: latency-svc-tnxhq [823.191418ms]
Aug 21 07:25:24.177: INFO: Created: latency-svc-8thhj
Aug 21 07:25:24.193: INFO: Got endpoints: latency-svc-8thhj [789.637522ms]
Aug 21 07:25:24.213: INFO: Created: latency-svc-22slk
Aug 21 07:25:24.232: INFO: Got endpoints: latency-svc-22slk [781.734774ms]
Aug 21 07:25:24.293: INFO: Created: latency-svc-f7648
Aug 21 07:25:24.296: INFO: Got endpoints: latency-svc-f7648 [795.202306ms]
Aug 21 07:25:24.347: INFO: Created: latency-svc-lqh9w
Aug 21 07:25:24.361: INFO: Got endpoints: latency-svc-lqh9w [793.265779ms]
Aug 21 07:25:24.381: INFO: Created: latency-svc-rcvxc
Aug 21 07:25:24.424: INFO: Got endpoints: latency-svc-rcvxc [823.322044ms]
Aug 21 07:25:24.441: INFO: Created: latency-svc-sfsd6
Aug 21 07:25:24.452: INFO: Got endpoints: latency-svc-sfsd6 [808.138871ms]
Aug 21 07:25:24.471: INFO: Created: latency-svc-p2xwm
Aug 21 07:25:24.483: INFO: Got endpoints: latency-svc-p2xwm [785.539025ms]
Aug 21 07:25:24.514: INFO: Created: latency-svc-sfl7r
Aug 21 07:25:24.561: INFO: Got endpoints: latency-svc-sfl7r [827.980609ms]
Aug 21 07:25:24.603: INFO: Created: latency-svc-qm84r
Aug 21 07:25:24.616: INFO: Got endpoints: latency-svc-qm84r [846.102456ms]
Aug 21 07:25:24.699: INFO: Created: latency-svc-dn7wf
Aug 21 07:25:24.731: INFO: Got endpoints: latency-svc-dn7wf [886.783346ms]
Aug 21 07:25:24.760: INFO: Created: latency-svc-6h2t4
Aug 21 07:25:24.795: INFO: Got endpoints: latency-svc-6h2t4 [900.164293ms]
Aug 21 07:25:24.808: INFO: Created: latency-svc-hd5pj
Aug 21 07:25:24.821: INFO: Got endpoints: latency-svc-hd5pj [882.152862ms]
Aug 21 07:25:24.842: INFO: Created: latency-svc-29262
Aug 21 07:25:24.856: INFO: Got endpoints: latency-svc-29262 [826.775731ms]
Aug 21 07:25:24.872: INFO: Created: latency-svc-7mpjn
Aug 21 07:25:24.887: INFO: Got endpoints: latency-svc-7mpjn [743.621322ms]
Aug 21 07:25:24.987: INFO: Created: latency-svc-lbgg8
Aug 21 07:25:24.991: INFO: Got endpoints: latency-svc-lbgg8 [836.261841ms]
Aug 21 07:25:25.041: INFO: Created: latency-svc-dfpwl
Aug 21 07:25:25.055: INFO: Got endpoints: latency-svc-dfpwl [861.563562ms]
Aug 21 07:25:25.076: INFO: Created: latency-svc-rgstr
Aug 21 07:25:25.132: INFO: Got endpoints: latency-svc-rgstr [900.108113ms]
Aug 21 07:25:25.155: INFO: Created: latency-svc-6gtfz
Aug 21 07:25:25.191: INFO: Got endpoints: latency-svc-6gtfz [895.006962ms]
Aug 21 07:25:25.275: INFO: Created: latency-svc-6jktn
Aug 21 07:25:25.278: INFO: Got endpoints: latency-svc-6jktn [916.788075ms]
Aug 21 07:25:25.322: INFO: Created: latency-svc-l92sn
Aug 21 07:25:25.505: INFO: Got endpoints: latency-svc-l92sn [1.08041836s]
Aug 21 07:25:25.508: INFO: Created: latency-svc-q8sqq
Aug 21 07:25:25.512: INFO: Got endpoints: latency-svc-q8sqq [1.060189197s]
Aug 21 07:25:25.532: INFO: Created: latency-svc-5v49v
Aug 21 07:25:25.549: INFO: Got endpoints: latency-svc-5v49v [1.066120738s]
Aug 21 07:25:25.582: INFO: Created: latency-svc-dkx59
Aug 21 07:25:25.598: INFO: Got endpoints: latency-svc-dkx59 [1.036277329s]
Aug 21 07:25:25.658: INFO: Created: latency-svc-zr9gw
Aug 21 07:25:25.664: INFO: Got endpoints: latency-svc-zr9gw [1.047584492s]
Aug 21 07:25:25.683: INFO: Created: latency-svc-7bvhf
Aug 21 07:25:25.694: INFO: Got endpoints: latency-svc-7bvhf [963.063003ms]
Aug 21 07:25:25.713: INFO: Created: latency-svc-8g7t8
Aug 21 07:25:25.725: INFO: Got endpoints: latency-svc-8g7t8 [929.65609ms]
Aug 21 07:25:25.744: INFO: Created: latency-svc-tbfp7
Aug 21 07:25:25.755: INFO: Got endpoints: latency-svc-tbfp7 [933.817679ms]
Aug 21 07:25:25.820: INFO: Created: latency-svc-2h7lj
Aug 21 07:25:25.826: INFO: Got endpoints: latency-svc-2h7lj [970.174592ms]
Aug 21 07:25:25.851: INFO: Created: latency-svc-wjcp5
Aug 21 07:25:25.898: INFO: Got endpoints: latency-svc-wjcp5 [1.011406521s]
Aug 21 07:25:25.976: INFO: Created: latency-svc-7mqkt
Aug 21 07:25:25.989: INFO: Got endpoints: latency-svc-7mqkt [997.777906ms]
Aug 21 07:25:26.032: INFO: Created: latency-svc-lsw6n
Aug 21 07:25:26.048: INFO: Got endpoints: latency-svc-lsw6n [992.457872ms]
Aug 21 07:25:26.113: INFO: Created: latency-svc-pbhxc
Aug 21 07:25:26.133: INFO: Got endpoints: latency-svc-pbhxc [1.00025036s]
Aug 21 07:25:26.156: INFO: Created: latency-svc-tqcdr
Aug 21 07:25:26.169: INFO: Got endpoints: latency-svc-tqcdr [977.75499ms]
Aug 21 07:25:26.186: INFO: Created: latency-svc-9g2ps
Aug 21 07:25:26.269: INFO: Got endpoints: latency-svc-9g2ps [990.301407ms]
Aug 21 07:25:26.283: INFO: Created: latency-svc-frwqx
Aug 21 07:25:26.295: INFO: Got endpoints: latency-svc-frwqx [789.893147ms]
Aug 21 07:25:26.318: INFO: Created: latency-svc-7qr9w
Aug 21 07:25:26.332: INFO: Got endpoints: latency-svc-7qr9w [818.952919ms]
Aug 21 07:25:26.355: INFO: Created: latency-svc-8vx2k
Aug 21 07:25:26.412: INFO: Got endpoints: latency-svc-8vx2k [863.180583ms]
Aug 21 07:25:26.421: INFO: Created: latency-svc-gs9k8
Aug 21 07:25:26.435: INFO: Got endpoints: latency-svc-gs9k8 [836.613652ms]
Aug 21 07:25:26.457: INFO: Created: latency-svc-5prpj
Aug 21 07:25:26.470: INFO: Got endpoints: latency-svc-5prpj [806.676649ms]
Aug 21 07:25:26.486: INFO: Created: latency-svc-tg4t6
Aug 21 07:25:26.502: INFO: Got endpoints: latency-svc-tg4t6 [807.525575ms]
Aug 21 07:25:26.557: INFO: Created: latency-svc-wlrxq
Aug 21 07:25:26.563: INFO: Got endpoints: latency-svc-wlrxq [836.904267ms]
Aug 21 07:25:26.656: INFO: Created: latency-svc-8zxwk
Aug 21 07:25:26.712: INFO: Got endpoints: latency-svc-8zxwk [957.527344ms]
Aug 21 07:25:26.733: INFO: Created: latency-svc-cdkn2
Aug 21 07:25:26.748: INFO: Got endpoints: latency-svc-cdkn2 [921.539535ms]
Aug 21 07:25:26.769: INFO: Created: latency-svc-pkv8z
Aug 21 07:25:26.779: INFO: Got endpoints: latency-svc-pkv8z [879.89541ms]
Aug 21 07:25:26.798: INFO: Created: latency-svc-gsb6s
Aug 21 07:25:26.898: INFO: Got endpoints: latency-svc-gsb6s [908.271165ms]
Aug 21 07:25:26.902: INFO: Created: latency-svc-qhjsb
Aug 21 07:25:26.916: INFO: Got endpoints: latency-svc-qhjsb [868.409783ms]
Aug 21 07:25:26.992: INFO: Created: latency-svc-r745p
Aug 21 07:25:27.029: INFO: Got endpoints: latency-svc-r745p [896.527993ms]
Aug 21 07:25:27.046: INFO: Created: latency-svc-kpr7t
Aug 21 07:25:27.063: INFO: Got endpoints: latency-svc-kpr7t [893.671024ms]
Aug 21 07:25:27.081: INFO: Created: latency-svc-28jtx
Aug 21 07:25:27.091: INFO: Got endpoints: latency-svc-28jtx [822.249987ms]
Aug 21 07:25:27.112: INFO: Created: latency-svc-f2trf
Aug 21 07:25:27.128: INFO: Got endpoints: latency-svc-f2trf [832.435722ms]
Aug 21 07:25:27.174: INFO: Created: latency-svc-fjstn
Aug 21 07:25:27.188: INFO: Got endpoints: latency-svc-fjstn [856.441662ms]
Aug 21 07:25:27.218: INFO: Created: latency-svc-xkn9l
Aug 21 07:25:27.230: INFO: Got endpoints: latency-svc-xkn9l [817.624338ms]
Aug 21 07:25:27.272: INFO: Created: latency-svc-vxtls
Aug 21 07:25:27.321: INFO: Created: latency-svc-f9k25
Aug 21 07:25:27.322: INFO: Got endpoints: latency-svc-vxtls [887.310302ms]
Aug 21 07:25:27.333: INFO: Got endpoints: latency-svc-f9k25 [862.225057ms]
Aug 21 07:25:27.393: INFO: Created: latency-svc-jtptn
Aug 21 07:25:27.442: INFO: Got endpoints: latency-svc-jtptn [940.47116ms]
Aug 21 07:25:27.459: INFO: Created: latency-svc-gfjfc
Aug 21 07:25:27.484: INFO: Got endpoints: latency-svc-gfjfc [920.876649ms]
Aug 21 07:25:27.519: INFO: Created: latency-svc-ktn2s
Aug 21 07:25:27.542: INFO: Got endpoints: latency-svc-ktn2s [829.354257ms]
Aug 21 07:25:27.592: INFO: Created: latency-svc-p5pjb
Aug 21 07:25:27.599: INFO: Got endpoints: latency-svc-p5pjb [851.12489ms]
Aug 21 07:25:27.623: INFO: Created: latency-svc-zb9h9
Aug 21 07:25:27.636: INFO: Got endpoints: latency-svc-zb9h9 [856.844973ms]
Aug 21 07:25:27.668: INFO: Created: latency-svc-q4m87
Aug 21 07:25:27.735: INFO: Got endpoints: latency-svc-q4m87 [837.389035ms]
Aug 21 07:25:27.777: INFO: Created: latency-svc-lnfb8
Aug 21 07:25:27.788: INFO: Got endpoints: latency-svc-lnfb8 [871.11305ms]
Aug 21 07:25:27.831: INFO: Created: latency-svc-b4b8b
Aug 21 07:25:27.874: INFO: Got endpoints: latency-svc-b4b8b [844.203117ms]
Aug 21 07:25:27.884: INFO: Created: latency-svc-k2vm2
Aug 21 07:25:27.916: INFO: Got endpoints: latency-svc-k2vm2 [853.167616ms]
Aug 21 07:25:27.951: INFO: Created: latency-svc-xlwzm
Aug 21 07:25:27.969: INFO: Got endpoints: latency-svc-xlwzm [877.693892ms]
Aug 21 07:25:28.018: INFO: Created: latency-svc-z28f9
Aug 21 07:25:28.059: INFO: Created: latency-svc-fbt85
Aug 21 07:25:28.061: INFO: Got endpoints: latency-svc-z28f9 [933.005241ms]
Aug 21 07:25:28.070: INFO: Got endpoints: latency-svc-fbt85 [881.904332ms]
Aug 21 07:25:28.101: INFO: Created: latency-svc-867cg
Aug 21 07:25:28.113: INFO: Got endpoints: latency-svc-867cg [882.367293ms]
Aug 21 07:25:28.167: INFO: Created: latency-svc-pzfrx
Aug 21 07:25:28.185: INFO: Got endpoints: latency-svc-pzfrx [862.641563ms]
Aug 21 07:25:28.227: INFO: Created: latency-svc-gcdlc
Aug 21 07:25:28.239: INFO: Got endpoints: latency-svc-gcdlc [906.347018ms]
Aug 21 07:25:28.262: INFO: Created: latency-svc-pvfd5
Aug 21 07:25:28.306: INFO: Got endpoints: latency-svc-pvfd5 [862.962392ms]
Aug 21 07:25:28.328: INFO: Created: latency-svc-4p6l7
Aug 21 07:25:28.360: INFO: Got endpoints: latency-svc-4p6l7 [875.795676ms]
Aug 21 07:25:28.389: INFO: Created: latency-svc-8d5k6
Aug 21 07:25:28.402: INFO: Got endpoints: latency-svc-8d5k6 [860.097988ms]
Aug 21 07:25:28.443: INFO: Created: latency-svc-ws2g8
Aug 21 07:25:28.451: INFO: Got endpoints: latency-svc-ws2g8 [851.08047ms]
Aug 21 07:25:28.467: INFO: Created: latency-svc-kmxqb
Aug 21 07:25:28.481: INFO: Got endpoints: latency-svc-kmxqb [844.769976ms]
Aug 21 07:25:28.495: INFO: Created: latency-svc-82hcl
Aug 21 07:25:28.511: INFO: Got endpoints: latency-svc-82hcl [775.299182ms]
Aug 21 07:25:28.531: INFO: Created: latency-svc-4wz4j
Aug 21 07:25:28.605: INFO: Got endpoints: latency-svc-4wz4j [817.318691ms]
Aug 21 07:25:28.638: INFO: Created: latency-svc-msgpw
Aug 21 07:25:28.649: INFO: Got endpoints: latency-svc-msgpw [775.341643ms]
Aug 21 07:25:28.689: INFO: Created: latency-svc-b6pvd
Aug 21 07:25:28.703: INFO: Got endpoints: latency-svc-b6pvd [786.657701ms]
Aug 21 07:25:28.781: INFO: Created: latency-svc-mdmtc
Aug 21 07:25:28.789: INFO: Got endpoints: latency-svc-mdmtc [819.713758ms]
Aug 21 07:25:28.808: INFO: Created: latency-svc-92vsc
Aug 21 07:25:28.831: INFO: Got endpoints: latency-svc-92vsc [770.177978ms]
Aug 21 07:25:28.862: INFO: Created: latency-svc-6cgln
Aug 21 07:25:28.916: INFO: Got endpoints: latency-svc-6cgln [845.379699ms]
Aug 21 07:25:28.941: INFO: Created: latency-svc-ccj57
Aug 21 07:25:28.968: INFO: Got endpoints: latency-svc-ccj57 [855.210415ms]
Aug 21 07:25:29.007: INFO: Created: latency-svc-pzx9v
Aug 21 07:25:29.054: INFO: Got endpoints: latency-svc-pzx9v [868.665976ms]
Aug 21 07:25:29.437: INFO: Created: latency-svc-l2f6z
Aug 21 07:25:29.444: INFO: Got endpoints: latency-svc-l2f6z [1.204027508s]
Aug 21 07:25:29.682: INFO: Created: latency-svc-qh2ct
Aug 21 07:25:29.707: INFO: Created: latency-svc-sw9wv
Aug 21 07:25:29.708: INFO: Got endpoints: latency-svc-qh2ct [1.402147005s]
Aug 21 07:25:29.726: INFO: Got endpoints: latency-svc-sw9wv [1.366707054s]
Aug 21 07:25:29.826: INFO: Created: latency-svc-vdbll
Aug 21 07:25:29.852: INFO: Created: latency-svc-vw9lf
Aug 21 07:25:29.853: INFO: Got endpoints: latency-svc-vdbll [1.450532195s]
Aug 21 07:25:29.887: INFO: Got endpoints: latency-svc-vw9lf [1.436223548s]
Aug 21 07:25:29.964: INFO: Created: latency-svc-x9vsh
Aug 21 07:25:29.985: INFO: Got endpoints: latency-svc-x9vsh [1.504248661s]
Aug 21 07:25:30.026: INFO: Created: latency-svc-kbcqf
Aug 21 07:25:30.043: INFO: Got endpoints: latency-svc-kbcqf [1.53187334s]
Aug 21 07:25:30.062: INFO: Created: latency-svc-rr9wr
Aug 21 07:25:30.125: INFO: Got endpoints: latency-svc-rr9wr [1.519737656s]
Aug 21 07:25:30.127: INFO: Created: latency-svc-klppp
Aug 21 07:25:30.138: INFO: Got endpoints: latency-svc-klppp [1.488782521s]
Aug 21 07:25:30.170: INFO: Created: latency-svc-zvmbj
Aug 21 07:25:30.188: INFO: Got endpoints: latency-svc-zvmbj [1.484177689s]
Aug 21 07:25:30.205: INFO: Created: latency-svc-smzml
Aug 21 07:25:30.305: INFO: Got endpoints: latency-svc-smzml [1.516107559s]
Aug 21 07:25:30.307: INFO: Created: latency-svc-wd8xj
Aug 21 07:25:30.325: INFO: Got endpoints: latency-svc-wd8xj [1.493483637s]
Aug 21 07:25:30.348: INFO: Created: latency-svc-bj6sj
Aug 21 07:25:30.361: INFO: Got endpoints: latency-svc-bj6sj [1.444994812s]
Aug 21 07:25:30.385: INFO: Created: latency-svc-4pj6q
Aug 21 07:25:30.399: INFO: Got endpoints: latency-svc-4pj6q [1.430236164s]
Aug 21 07:25:30.455: INFO: Created: latency-svc-szw4t
Aug 21 07:25:30.482: INFO: Created: latency-svc-ts867
Aug 21 07:25:30.484: INFO: Got endpoints: latency-svc-szw4t [1.42938292s]
Aug 21 07:25:30.505: INFO: Got endpoints: latency-svc-ts867 [1.06160945s]
Aug 21 07:25:30.536: INFO: Created: latency-svc-dvp25
Aug 21 07:25:30.549: INFO: Got endpoints: latency-svc-dvp25 [840.659623ms]
Aug 21 07:25:30.622: INFO: Created: latency-svc-565vg
Aug 21 07:25:30.625: INFO: Got endpoints: latency-svc-565vg [898.107886ms]
Aug 21 07:25:30.717: INFO: Created: latency-svc-hfq97
Aug 21 07:25:30.788: INFO: Got endpoints: latency-svc-hfq97 [934.362596ms]
Aug 21 07:25:30.818: INFO: Created: latency-svc-qcz98
Aug 21 07:25:30.832: INFO: Got endpoints: latency-svc-qcz98 [944.831744ms]
Aug 21 07:25:30.859: INFO: Created: latency-svc-r9wcx
Aug 21 07:25:30.893: INFO: Got endpoints: latency-svc-r9wcx [907.277256ms]
Aug 21 07:25:31.516: INFO: Created: latency-svc-ld8sj
Aug 21 07:25:31.521: INFO: Got endpoints: latency-svc-ld8sj [1.477970482s]
Aug 21 07:25:31.585: INFO: Created: latency-svc-mnxz2
Aug 21 07:25:31.595: INFO: Got endpoints: latency-svc-mnxz2 [1.469765049s]
Aug 21 07:25:31.669: INFO: Created: latency-svc-jwxn5
Aug 21 07:25:31.692: INFO: Got endpoints: latency-svc-jwxn5 [1.553157743s]
Aug 21 07:25:31.722: INFO: Created: latency-svc-dxtbr
Aug 21 07:25:31.734: INFO: Got endpoints: latency-svc-dxtbr [1.545803292s]
Aug 21 07:25:31.855: INFO: Created: latency-svc-74fs2
Aug 21 07:25:31.869: INFO: Got endpoints: latency-svc-74fs2 [1.563741453s]
Aug 21 07:25:31.895: INFO: Created: latency-svc-dwg7r
Aug 21 07:25:31.921: INFO: Got endpoints: latency-svc-dwg7r [1.595661597s]
Aug 21 07:25:31.950: INFO: Created: latency-svc-vx948
Aug 21 07:25:32.005: INFO: Got endpoints: latency-svc-vx948 [1.643256237s]
Aug 21 07:25:32.016: INFO: Created: latency-svc-6b8cv
Aug 21 07:25:32.036: INFO: Got endpoints: latency-svc-6b8cv [1.636776975s]
Aug 21 07:25:32.065: INFO: Created: latency-svc-z76mf
Aug 21 07:25:32.089: INFO: Got endpoints: latency-svc-z76mf [1.605498744s]
Aug 21 07:25:32.155: INFO: Created: latency-svc-gmcpb
Aug 21 07:25:32.159: INFO: Got endpoints: latency-svc-gmcpb [1.653322535s]
Aug 21 07:25:32.232: INFO: Created: latency-svc-n8cwd
Aug 21 07:25:32.246: INFO: Got endpoints: latency-svc-n8cwd [1.697174349s]
Aug 21 07:25:32.248: INFO: Latencies: [82.160604ms 166.26896ms 291.615243ms 334.653676ms 416.983073ms 441.182787ms 503.925244ms 585.966333ms 628.125359ms 714.751266ms 743.621322ms 770.177978ms 775.299182ms 775.341643ms 781.734774ms 785.539025ms 786.657701ms 789.637522ms 789.893147ms 793.265779ms 795.202306ms 796.475204ms 797.110606ms 806.676649ms 807.525575ms 808.138871ms 811.930871ms 817.318691ms 817.624338ms 818.952919ms 819.713758ms 822.249987ms 823.191418ms 823.322044ms 825.989119ms 826.775731ms 827.980609ms 829.354257ms 832.435722ms 836.261841ms 836.613652ms 836.904267ms 837.389035ms 840.659623ms 844.203117ms 844.769976ms 845.379699ms 846.102456ms 851.08047ms 851.12489ms 853.167616ms 855.210415ms 856.441662ms 856.844973ms 857.24187ms 860.097988ms 861.563562ms 861.877474ms 862.225057ms 862.641563ms 862.962392ms 863.180583ms 864.696393ms 868.409783ms 868.665976ms 871.11305ms 875.795676ms 877.693892ms 878.343264ms 879.89541ms 881.904332ms 882.152862ms 882.367293ms 886.783346ms 887.310302ms 893.671024ms 895.006962ms 895.798492ms 896.527993ms 898.107886ms 900.108113ms 900.164293ms 906.347018ms 907.277256ms 908.271165ms 911.68267ms 916.788075ms 919.98379ms 920.876649ms 921.539535ms 923.085231ms 928.205119ms 928.959504ms 929.65609ms 933.005241ms 933.817679ms 934.362596ms 940.47116ms 944.831744ms 945.702585ms 956.55821ms 957.202834ms 957.527344ms 962.406749ms 963.063003ms 963.50767ms 968.684341ms 970.174592ms 972.714543ms 974.779462ms 977.75499ms 980.253485ms 984.353253ms 990.301407ms 991.596532ms 992.457872ms 997.777906ms 998.400351ms 998.846303ms 1.00025036s 1.006810405s 1.011406521s 1.014820698s 1.015319685s 1.015881329s 1.023689513s 1.036277329s 1.047584492s 1.048052461s 1.049819004s 1.051899484s 1.052820394s 1.054106883s 1.060189197s 1.06160945s 1.061655108s 1.066120738s 1.06996167s 1.078342201s 1.08041836s 1.086630222s 1.093180677s 1.095153893s 1.102236552s 1.103568868s 1.128613293s 1.138486044s 1.143133212s 1.146230258s 1.154236351s 1.160840087s 1.169163923s 1.180081789s 1.180136305s 1.187801283s 1.204027508s 1.208522944s 1.366707054s 1.371090639s 1.382293614s 1.395570552s 1.400893524s 1.402147005s 1.42938292s 1.429976393s 1.430236164s 1.436223548s 1.444994812s 1.450532195s 1.462805268s 1.469765049s 1.477970482s 1.484177689s 1.488782521s 1.493483637s 1.504248661s 1.516107559s 1.519737656s 1.53187334s 1.545803292s 1.553157743s 1.563741453s 1.595661597s 1.605498744s 1.636776975s 1.643256237s 1.653322535s 1.697174349s 1.758721735s 1.766300643s 1.778982582s 1.784873269s 1.802259631s 1.807500865s 1.834126524s 1.836526672s 1.845544539s 1.845955467s 1.852241138s 1.895983365s]
Aug 21 07:25:32.250: INFO: 50 %ile: 956.55821ms
Aug 21 07:25:32.250: INFO: 90 %ile: 1.553157743s
Aug 21 07:25:32.250: INFO: 99 %ile: 1.852241138s
Aug 21 07:25:32.250: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:25:32.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3848" for this suite.

• [SLOW TEST:18.175 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":270,"skipped":4670,"failed":0}
SSSSSSS
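
For reference, a minimal Go sketch (an assumption, not the e2e framework's own code) of how percentile figures like the 50/90/99 %ile reported above can be derived from the collected endpoint-latency samples: sort the durations and take the nearest-rank element. Only a handful of the 200 samples from the log are reused here for illustration.

package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0 < p <= 100) of the samples,
// using the nearest-rank method on a sorted copy of the slice.
func percentile(samples []time.Duration, p float64) time.Duration {
	s := append([]time.Duration(nil), samples...)
	sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
	rank := int(math.Ceil(p/100*float64(len(s)))) - 1
	if rank < 0 {
		rank = 0
	}
	return s[rank]
}

func main() {
	// A few of the samples from the log above, for illustration only.
	samples := []time.Duration{
		82160604 * time.Nanosecond,
		956558210 * time.Nanosecond,
		1553157743 * time.Nanosecond,
		1895983365 * time.Nanosecond,
	}
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
	}
}
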
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:25:32.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 21 07:25:32.457: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7322 /api/v1/namespaces/watch-7322/configmaps/e2e-watch-test-watch-closed 9a87b289-f718-4d00-91db-9d9461164865 2039071 0 2020-08-21 07:25:32 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-21 07:25:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 07:25:32.458: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7322 /api/v1/namespaces/watch-7322/configmaps/e2e-watch-test-watch-closed 9a87b289-f718-4d00-91db-9d9461164865 2039072 0 2020-08-21 07:25:32 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-21 07:25:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 21 07:25:32.474: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7322 /api/v1/namespaces/watch-7322/configmaps/e2e-watch-test-watch-closed 9a87b289-f718-4d00-91db-9d9461164865 2039073 0 2020-08-21 07:25:32 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-21 07:25:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 07:25:32.476: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7322 /api/v1/namespaces/watch-7322/configmaps/e2e-watch-test-watch-closed 9a87b289-f718-4d00-91db-9d9461164865 2039074 0 2020-08-21 07:25:32 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-21 07:25:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:25:32.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7322" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":271,"skipped":4677,"failed":0}

------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:25:32.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-2555a330-5303-47c2-a634-d80da14eb5bc
STEP: Creating a pod to test consume secrets
Aug 21 07:25:32.606: INFO: Waiting up to 5m0s for pod "pod-secrets-b37f9b3e-5470-4f44-a96c-9a6f56f8acbd" in namespace "secrets-8198" to be "Succeeded or Failed"
Aug 21 07:25:32.640: INFO: Pod "pod-secrets-b37f9b3e-5470-4f44-a96c-9a6f56f8acbd": Phase="Pending", Reason="", readiness=false. Elapsed: 33.80524ms
Aug 21 07:25:34.742: INFO: Pod "pod-secrets-b37f9b3e-5470-4f44-a96c-9a6f56f8acbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135291621s
Aug 21 07:25:36.749: INFO: Pod "pod-secrets-b37f9b3e-5470-4f44-a96c-9a6f56f8acbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142710668s
Aug 21 07:25:38.771: INFO: Pod "pod-secrets-b37f9b3e-5470-4f44-a96c-9a6f56f8acbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.164777392s
STEP: Saw pod success
Aug 21 07:25:38.772: INFO: Pod "pod-secrets-b37f9b3e-5470-4f44-a96c-9a6f56f8acbd" satisfied condition "Succeeded or Failed"
Aug 21 07:25:38.784: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-b37f9b3e-5470-4f44-a96c-9a6f56f8acbd container secret-volume-test: 
STEP: delete the pod
Aug 21 07:25:38.839: INFO: Waiting for pod pod-secrets-b37f9b3e-5470-4f44-a96c-9a6f56f8acbd to disappear
Aug 21 07:25:38.856: INFO: Pod pod-secrets-b37f9b3e-5470-4f44-a96c-9a6f56f8acbd no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:25:38.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8198" for this suite.

• [SLOW TEST:6.472 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4677,"failed":0}
S
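
The pod created by this test mounts the secret through a volume that maps a key to a new path and sets an explicit per-item file mode. A minimal sketch of such a spec, assuming k8s.io/api v0.18.x; the key, path, mode, image, and mount path are illustrative, not taken from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // file mode requested for the mapped item
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-2555a330-5303-47c2-a634-d80da14eb5bc",
						// Map one key to a new filename and give it an explicit mode.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Args:         []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
			}},
		},
	}
	fmt.Printf("volume items: %+v\n", pod.Spec.Volumes[0].Secret.Items)
}
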
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:25:38.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 07:25:39.162: INFO: The status of Pod test-webserver-b4ef931d-f89c-4be1-9f56-bac6c0bc08ed is Pending, waiting for it to be Running (with Ready = true)
Aug 21 07:25:41.557: INFO: The status of Pod test-webserver-b4ef931d-f89c-4be1-9f56-bac6c0bc08ed is Pending, waiting for it to be Running (with Ready = true)
Aug 21 07:25:43.171: INFO: The status of Pod test-webserver-b4ef931d-f89c-4be1-9f56-bac6c0bc08ed is Pending, waiting for it to be Running (with Ready = true)
Aug 21 07:25:45.169: INFO: The status of Pod test-webserver-b4ef931d-f89c-4be1-9f56-bac6c0bc08ed is Running (Ready = false)
Aug 21 07:25:47.175: INFO: The status of Pod test-webserver-b4ef931d-f89c-4be1-9f56-bac6c0bc08ed is Running (Ready = false)
Aug 21 07:25:49.168: INFO: The status of Pod test-webserver-b4ef931d-f89c-4be1-9f56-bac6c0bc08ed is Running (Ready = false)
Aug 21 07:25:51.167: INFO: The status of Pod test-webserver-b4ef931d-f89c-4be1-9f56-bac6c0bc08ed is Running (Ready = false)
Aug 21 07:25:53.276: INFO: The status of Pod test-webserver-b4ef931d-f89c-4be1-9f56-bac6c0bc08ed is Running (Ready = false)
Aug 21 07:25:55.200: INFO: The status of Pod test-webserver-b4ef931d-f89c-4be1-9f56-bac6c0bc08ed is Running (Ready = false)
Aug 21 07:25:57.172: INFO: The status of Pod test-webserver-b4ef931d-f89c-4be1-9f56-bac6c0bc08ed is Running (Ready = false)
Aug 21 07:25:59.167: INFO: The status of Pod test-webserver-b4ef931d-f89c-4be1-9f56-bac6c0bc08ed is Running (Ready = false)
Aug 21 07:26:01.346: INFO: The status of Pod test-webserver-b4ef931d-f89c-4be1-9f56-bac6c0bc08ed is Running (Ready = true)
Aug 21 07:26:01.358: INFO: Container started at 2020-08-21 07:25:42 +0000 UTC, pod became ready at 2020-08-21 07:26:00 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:26:01.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-921" for this suite.

• [SLOW TEST:22.483 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4678,"failed":0}
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:26:01.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 07:26:05.813: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:26:05.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6375" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4678,"failed":0}
SSSSSSSS
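
The preceding test relies on TerminationMessagePolicy FallbackToLogsOnError: container logs are copied into the termination message only when the container exits with an error, so a successful container that writes nothing to the termination message path reports an empty message (the "Expected: &{}" line above). A minimal sketch, assuming k8s.io/api v0.18.x; image and command are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "exit 0"}, // succeeds and logs nothing
		// On success nothing is written to the termination message path and
		// logs are not used, so the reported message stays empty.
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		TerminationMessagePath:   "/dev/termination-log",
	}
	fmt.Println("policy:", c.TerminationMessagePolicy)
}
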
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 07:26:05.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 07:26:06.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 21 07:26:15.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4077 create -f -'
Aug 21 07:26:19.915: INFO: stderr: ""
Aug 21 07:26:19.915: INFO: stdout: "e2e-test-crd-publish-openapi-8042-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 21 07:26:19.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4077 delete e2e-test-crd-publish-openapi-8042-crds test-cr'
Aug 21 07:26:21.090: INFO: stderr: ""
Aug 21 07:26:21.090: INFO: stdout: "e2e-test-crd-publish-openapi-8042-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 21 07:26:21.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4077 apply -f -'
Aug 21 07:26:22.669: INFO: stderr: ""
Aug 21 07:26:22.669: INFO: stdout: "e2e-test-crd-publish-openapi-8042-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 21 07:26:22.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4077 delete e2e-test-crd-publish-openapi-8042-crds test-cr'
Aug 21 07:26:23.810: INFO: stderr: ""
Aug 21 07:26:23.810: INFO: stdout: "e2e-test-crd-publish-openapi-8042-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 21 07:26:23.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8042-crds'
Aug 21 07:26:25.308: INFO: stderr: ""
Aug 21 07:26:25.308: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8042-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 07:26:43.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4077" for this suite.

• [SLOW TEST:37.996 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":275,"skipped":4686,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Aug 21 07:26:43.962: INFO: Running AfterSuite actions on all nodes
Aug 21 07:26:43.964: INFO: Running AfterSuite actions on node 1
Aug 21 07:26:43.964: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 5555.647 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS