I0514 10:50:28.431323 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0514 10:50:28.431614 7 e2e.go:124] Starting e2e run "1b0ac4e6-8aa7-4483-a338-95453b346736" on Ginkgo node 1 {"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1589453427 - Will randomize all specs Will run 275 of 4992 specs May 14 10:50:28.495: INFO: >>> kubeConfig: /root/.kube/config May 14 10:50:28.499: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 14 10:50:28.529: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 14 10:50:28.563: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 14 10:50:28.563: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 14 10:50:28.563: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 14 10:50:28.573: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) May 14 10:50:28.573: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 14 10:50:28.573: INFO: e2e test version: v1.18.2 May 14 10:50:28.574: INFO: kube-apiserver version: v1.18.2 May 14 10:50:28.574: INFO: >>> kubeConfig: /root/.kube/config May 14 10:50:28.578: INFO: Cluster IP family: ipv4 SSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:50:28.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename downward-api May 14 10:50:28.642: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars May 14 10:50:28.652: INFO: Waiting up to 5m0s for pod "downward-api-f273d51d-e405-4d39-9963-018aa28368ab" in namespace "downward-api-6841" to be "Succeeded or Failed" May 14 10:50:28.718: INFO: Pod "downward-api-f273d51d-e405-4d39-9963-018aa28368ab": Phase="Pending", Reason="", readiness=false. Elapsed: 66.527269ms May 14 10:50:30.723: INFO: Pod "downward-api-f273d51d-e405-4d39-9963-018aa28368ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070932079s May 14 10:50:32.726: INFO: Pod "downward-api-f273d51d-e405-4d39-9963-018aa28368ab": Phase="Running", Reason="", readiness=true. Elapsed: 4.074012451s May 14 10:50:34.729: INFO: Pod "downward-api-f273d51d-e405-4d39-9963-018aa28368ab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.077312223s STEP: Saw pod success May 14 10:50:34.729: INFO: Pod "downward-api-f273d51d-e405-4d39-9963-018aa28368ab" satisfied condition "Succeeded or Failed" May 14 10:50:34.731: INFO: Trying to get logs from node kali-worker2 pod downward-api-f273d51d-e405-4d39-9963-018aa28368ab container dapi-container: STEP: delete the pod May 14 10:50:34.790: INFO: Waiting for pod downward-api-f273d51d-e405-4d39-9963-018aa28368ab to disappear May 14 10:50:34.891: INFO: Pod downward-api-f273d51d-e405-4d39-9963-018aa28368ab no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:50:34.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6841" for this suite. • [SLOW TEST:6.321 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":4,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:50:34.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7779 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7779 STEP: creating replication controller externalsvc in namespace services-7779 I0514 10:50:35.401997 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7779, replica count: 2 I0514 10:50:38.452443 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 10:50:41.452725 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 14 10:50:41.552: INFO: Creating new exec pod May 14 10:50:45.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-7779 execpod749hv -- /bin/sh -x -c nslookup nodeport-service' May 14 10:50:48.755: INFO: stderr: "I0514 10:50:48.514319 28 log.go:172] (0xc000952d10) (0xc000930280) Create stream\nI0514 10:50:48.514367 28 log.go:172] (0xc000952d10) (0xc000930280) Stream added, broadcasting: 1\nI0514 10:50:48.516910 28 log.go:172] (0xc000952d10) Reply frame received for 1\nI0514 10:50:48.516935 28 log.go:172] (0xc000952d10) (0xc000930320) Create stream\nI0514 10:50:48.516942 28 log.go:172] (0xc000952d10) (0xc000930320) Stream added, broadcasting: 3\nI0514 10:50:48.517674 28 log.go:172] (0xc000952d10) Reply frame received for 3\nI0514 10:50:48.517707 28 
log.go:172] (0xc000952d10) (0xc0008ba0a0) Create stream\nI0514 10:50:48.517721 28 log.go:172] (0xc000952d10) (0xc0008ba0a0) Stream added, broadcasting: 5\nI0514 10:50:48.518446 28 log.go:172] (0xc000952d10) Reply frame received for 5\nI0514 10:50:48.649356 28 log.go:172] (0xc000952d10) Data frame received for 5\nI0514 10:50:48.649439 28 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0514 10:50:48.649457 28 log.go:172] (0xc0008ba0a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0514 10:50:48.741687 28 log.go:172] (0xc000952d10) Data frame received for 3\nI0514 10:50:48.741722 28 log.go:172] (0xc000930320) (3) Data frame handling\nI0514 10:50:48.741747 28 log.go:172] (0xc000930320) (3) Data frame sent\nI0514 10:50:48.742831 28 log.go:172] (0xc000952d10) Data frame received for 3\nI0514 10:50:48.742866 28 log.go:172] (0xc000930320) (3) Data frame handling\nI0514 10:50:48.742910 28 log.go:172] (0xc000930320) (3) Data frame sent\nI0514 10:50:48.743359 28 log.go:172] (0xc000952d10) Data frame received for 5\nI0514 10:50:48.743387 28 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0514 10:50:48.743508 28 log.go:172] (0xc000952d10) Data frame received for 3\nI0514 10:50:48.743537 28 log.go:172] (0xc000930320) (3) Data frame handling\nI0514 10:50:48.745940 28 log.go:172] (0xc000952d10) Data frame received for 1\nI0514 10:50:48.745979 28 log.go:172] (0xc000930280) (1) Data frame handling\nI0514 10:50:48.746169 28 log.go:172] (0xc000930280) (1) Data frame sent\nI0514 10:50:48.746219 28 log.go:172] (0xc000952d10) (0xc000930280) Stream removed, broadcasting: 1\nI0514 10:50:48.746294 28 log.go:172] (0xc000952d10) Go away received\nI0514 10:50:48.746798 28 log.go:172] (0xc000952d10) (0xc000930280) Stream removed, broadcasting: 1\nI0514 10:50:48.746911 28 log.go:172] (0xc000952d10) (0xc000930320) Stream removed, broadcasting: 3\nI0514 10:50:48.746941 28 log.go:172] (0xc000952d10) (0xc0008ba0a0) Stream removed, broadcasting: 5\n" May 14 10:50:48.755: INFO: stdout: 
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7779.svc.cluster.local\tcanonical name = externalsvc.services-7779.svc.cluster.local.\nName:\texternalsvc.services-7779.svc.cluster.local\nAddress: 10.108.218.245\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7779, will wait for the garbage collector to delete the pods May 14 10:50:48.816: INFO: Deleting ReplicationController externalsvc took: 6.743802ms May 14 10:50:49.116: INFO: Terminating ReplicationController externalsvc pods took: 300.27381ms May 14 10:51:04.108: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:51:04.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7779" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:29.268 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":2,"skipped":5,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 
STEP: Creating a kubernetes client May 14 10:51:04.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:51:36.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2982" for this suite. STEP: Destroying namespace "nsdeletetest-365" for this suite. May 14 10:51:36.791: INFO: Namespace nsdeletetest-365 was already deleted STEP: Destroying namespace "nsdeletetest-3234" for this suite. 
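The namespace-deletion flow exercised above (create a namespace, run a pod in it, delete the namespace, verify the pod is gone) can be reproduced by hand with a manifest along these lines. All names here are hypothetical illustrations, not taken from the log, and this is only a sketch of the scenario the conformance test automates:

```yaml
# Hypothetical reproduction of the namespace-deletion scenario:
# a namespace containing a single pod. Deleting the namespace
# should cascade-delete the pod along with it.
apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-demo
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdeletetest-demo
spec:
  containers:
  - name: web
    image: nginx
```

Applying this with `kubectl apply -f`, waiting for the pod to reach Running, then deleting the namespace with `kubectl delete namespace nsdeletetest-demo` should leave `kubectl get pods -n nsdeletetest-demo` reporting no resources once deletion completes, which is the condition the test asserts.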
• [SLOW TEST:32.655 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":3,"skipped":35,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:51:36.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-8875 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8875 STEP: Waiting until 
all stateful set ss replicas will be running in namespace statefulset-8875 May 14 10:51:36.920: INFO: Found 0 stateful pods, waiting for 1 May 14 10:51:46.925: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 14 10:51:46.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8875 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 10:51:47.192: INFO: stderr: "I0514 10:51:47.057505 62 log.go:172] (0xc0009df290) (0xc000c5c640) Create stream\nI0514 10:51:47.057572 62 log.go:172] (0xc0009df290) (0xc000c5c640) Stream added, broadcasting: 1\nI0514 10:51:47.059944 62 log.go:172] (0xc0009df290) Reply frame received for 1\nI0514 10:51:47.059982 62 log.go:172] (0xc0009df290) (0xc000c5c6e0) Create stream\nI0514 10:51:47.059993 62 log.go:172] (0xc0009df290) (0xc000c5c6e0) Stream added, broadcasting: 3\nI0514 10:51:47.060769 62 log.go:172] (0xc0009df290) Reply frame received for 3\nI0514 10:51:47.060799 62 log.go:172] (0xc0009df290) (0xc000c5c780) Create stream\nI0514 10:51:47.060806 62 log.go:172] (0xc0009df290) (0xc000c5c780) Stream added, broadcasting: 5\nI0514 10:51:47.062310 62 log.go:172] (0xc0009df290) Reply frame received for 5\nI0514 10:51:47.138908 62 log.go:172] (0xc0009df290) Data frame received for 5\nI0514 10:51:47.138933 62 log.go:172] (0xc000c5c780) (5) Data frame handling\nI0514 10:51:47.138949 62 log.go:172] (0xc000c5c780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 10:51:47.180954 62 log.go:172] (0xc0009df290) Data frame received for 3\nI0514 10:51:47.180989 62 log.go:172] (0xc000c5c6e0) (3) Data frame handling\nI0514 10:51:47.181017 62 log.go:172] (0xc000c5c6e0) (3) Data frame sent\nI0514 10:51:47.181063 62 log.go:172] (0xc0009df290) Data frame received for 5\nI0514 10:51:47.181107 
62 log.go:172] (0xc000c5c780) (5) Data frame handling\nI0514 10:51:47.181649 62 log.go:172] (0xc0009df290) Data frame received for 3\nI0514 10:51:47.181662 62 log.go:172] (0xc000c5c6e0) (3) Data frame handling\nI0514 10:51:47.183610 62 log.go:172] (0xc0009df290) Data frame received for 1\nI0514 10:51:47.183628 62 log.go:172] (0xc000c5c640) (1) Data frame handling\nI0514 10:51:47.183638 62 log.go:172] (0xc000c5c640) (1) Data frame sent\nI0514 10:51:47.183650 62 log.go:172] (0xc0009df290) (0xc000c5c640) Stream removed, broadcasting: 1\nI0514 10:51:47.183664 62 log.go:172] (0xc0009df290) Go away received\nI0514 10:51:47.184135 62 log.go:172] (0xc0009df290) (0xc000c5c640) Stream removed, broadcasting: 1\nI0514 10:51:47.184153 62 log.go:172] (0xc0009df290) (0xc000c5c6e0) Stream removed, broadcasting: 3\nI0514 10:51:47.184163 62 log.go:172] (0xc0009df290) (0xc000c5c780) Stream removed, broadcasting: 5\n" May 14 10:51:47.192: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 10:51:47.192: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 10:51:47.223: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 10:51:47.223: INFO: Waiting for statefulset status.replicas updated to 0 May 14 10:51:47.316: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999289s May 14 10:51:48.320: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.981742323s May 14 10:51:49.324: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.977727602s May 14 10:51:50.328: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.973747168s May 14 10:51:51.332: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.969386451s May 14 10:51:52.380: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.965766343s May 14 10:51:53.498: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 3.917330342s May 14 10:51:54.504: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.799072351s May 14 10:51:55.508: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.793853938s May 14 10:51:56.514: INFO: Verifying statefulset ss doesn't scale past 1 for another 789.409653ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8875 May 14 10:51:57.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 10:51:57.730: INFO: stderr: "I0514 10:51:57.642189 81 log.go:172] (0xc00003afd0) (0xc0008fc280) Create stream\nI0514 10:51:57.642243 81 log.go:172] (0xc00003afd0) (0xc0008fc280) Stream added, broadcasting: 1\nI0514 10:51:57.644673 81 log.go:172] (0xc00003afd0) Reply frame received for 1\nI0514 10:51:57.644727 81 log.go:172] (0xc00003afd0) (0xc000400960) Create stream\nI0514 10:51:57.644744 81 log.go:172] (0xc00003afd0) (0xc000400960) Stream added, broadcasting: 3\nI0514 10:51:57.645688 81 log.go:172] (0xc00003afd0) Reply frame received for 3\nI0514 10:51:57.645729 81 log.go:172] (0xc00003afd0) (0xc0006b5180) Create stream\nI0514 10:51:57.645740 81 log.go:172] (0xc00003afd0) (0xc0006b5180) Stream added, broadcasting: 5\nI0514 10:51:57.646479 81 log.go:172] (0xc00003afd0) Reply frame received for 5\nI0514 10:51:57.717715 81 log.go:172] (0xc00003afd0) Data frame received for 5\nI0514 10:51:57.717741 81 log.go:172] (0xc0006b5180) (5) Data frame handling\nI0514 10:51:57.717754 81 log.go:172] (0xc0006b5180) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 10:51:57.723514 81 log.go:172] (0xc00003afd0) Data frame received for 3\nI0514 10:51:57.723540 81 log.go:172] (0xc000400960) (3) Data frame handling\nI0514 10:51:57.723561 81 
log.go:172] (0xc000400960) (3) Data frame sent\nI0514 10:51:57.723840 81 log.go:172] (0xc00003afd0) Data frame received for 3\nI0514 10:51:57.723862 81 log.go:172] (0xc000400960) (3) Data frame handling\nI0514 10:51:57.723885 81 log.go:172] (0xc00003afd0) Data frame received for 5\nI0514 10:51:57.723907 81 log.go:172] (0xc0006b5180) (5) Data frame handling\nI0514 10:51:57.725428 81 log.go:172] (0xc00003afd0) Data frame received for 1\nI0514 10:51:57.725454 81 log.go:172] (0xc0008fc280) (1) Data frame handling\nI0514 10:51:57.725499 81 log.go:172] (0xc0008fc280) (1) Data frame sent\nI0514 10:51:57.725514 81 log.go:172] (0xc00003afd0) (0xc0008fc280) Stream removed, broadcasting: 1\nI0514 10:51:57.725527 81 log.go:172] (0xc00003afd0) Go away received\nI0514 10:51:57.725834 81 log.go:172] (0xc00003afd0) (0xc0008fc280) Stream removed, broadcasting: 1\nI0514 10:51:57.725849 81 log.go:172] (0xc00003afd0) (0xc000400960) Stream removed, broadcasting: 3\nI0514 10:51:57.725855 81 log.go:172] (0xc00003afd0) (0xc0006b5180) Stream removed, broadcasting: 5\n" May 14 10:51:57.730: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 10:51:57.730: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 10:51:57.744: INFO: Found 1 stateful pods, waiting for 3 May 14 10:52:07.749: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 14 10:52:07.749: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 14 10:52:07.749: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 14 10:52:07.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8875 ss-0 
-- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 10:52:07.972: INFO: stderr: "I0514 10:52:07.889588 100 log.go:172] (0xc0009502c0) (0xc0008014a0) Create stream\nI0514 10:52:07.889640 100 log.go:172] (0xc0009502c0) (0xc0008014a0) Stream added, broadcasting: 1\nI0514 10:52:07.893539 100 log.go:172] (0xc0009502c0) Reply frame received for 1\nI0514 10:52:07.893573 100 log.go:172] (0xc0009502c0) (0xc000801540) Create stream\nI0514 10:52:07.893584 100 log.go:172] (0xc0009502c0) (0xc000801540) Stream added, broadcasting: 3\nI0514 10:52:07.894575 100 log.go:172] (0xc0009502c0) Reply frame received for 3\nI0514 10:52:07.894615 100 log.go:172] (0xc0009502c0) (0xc0008015e0) Create stream\nI0514 10:52:07.894632 100 log.go:172] (0xc0009502c0) (0xc0008015e0) Stream added, broadcasting: 5\nI0514 10:52:07.895641 100 log.go:172] (0xc0009502c0) Reply frame received for 5\nI0514 10:52:07.967285 100 log.go:172] (0xc0009502c0) Data frame received for 3\nI0514 10:52:07.967305 100 log.go:172] (0xc000801540) (3) Data frame handling\nI0514 10:52:07.967312 100 log.go:172] (0xc000801540) (3) Data frame sent\nI0514 10:52:07.967317 100 log.go:172] (0xc0009502c0) Data frame received for 3\nI0514 10:52:07.967330 100 log.go:172] (0xc0009502c0) Data frame received for 5\nI0514 10:52:07.967344 100 log.go:172] (0xc0008015e0) (5) Data frame handling\nI0514 10:52:07.967352 100 log.go:172] (0xc0008015e0) (5) Data frame sent\nI0514 10:52:07.967358 100 log.go:172] (0xc0009502c0) Data frame received for 5\nI0514 10:52:07.967365 100 log.go:172] (0xc0008015e0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 10:52:07.967397 100 log.go:172] (0xc000801540) (3) Data frame handling\nI0514 10:52:07.968272 100 log.go:172] (0xc0009502c0) Data frame received for 1\nI0514 10:52:07.968289 100 log.go:172] (0xc0008014a0) (1) Data frame handling\nI0514 10:52:07.968297 100 log.go:172] (0xc0008014a0) (1) Data frame sent\nI0514 10:52:07.968305 100 
log.go:172] (0xc0009502c0) (0xc0008014a0) Stream removed, broadcasting: 1\nI0514 10:52:07.968332 100 log.go:172] (0xc0009502c0) Go away received\nI0514 10:52:07.968588 100 log.go:172] (0xc0009502c0) (0xc0008014a0) Stream removed, broadcasting: 1\nI0514 10:52:07.968600 100 log.go:172] (0xc0009502c0) (0xc000801540) Stream removed, broadcasting: 3\nI0514 10:52:07.968606 100 log.go:172] (0xc0009502c0) (0xc0008015e0) Stream removed, broadcasting: 5\n" May 14 10:52:07.972: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 10:52:07.972: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 10:52:07.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8875 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 10:52:08.205: INFO: stderr: "I0514 10:52:08.084332 122 log.go:172] (0xc00003b600) (0xc000669540) Create stream\nI0514 10:52:08.084395 122 log.go:172] (0xc00003b600) (0xc000669540) Stream added, broadcasting: 1\nI0514 10:52:08.088790 122 log.go:172] (0xc00003b600) Reply frame received for 1\nI0514 10:52:08.088863 122 log.go:172] (0xc00003b600) (0xc0008e6000) Create stream\nI0514 10:52:08.088885 122 log.go:172] (0xc00003b600) (0xc0008e6000) Stream added, broadcasting: 3\nI0514 10:52:08.090241 122 log.go:172] (0xc00003b600) Reply frame received for 3\nI0514 10:52:08.090278 122 log.go:172] (0xc00003b600) (0xc0006695e0) Create stream\nI0514 10:52:08.090289 122 log.go:172] (0xc00003b600) (0xc0006695e0) Stream added, broadcasting: 5\nI0514 10:52:08.091070 122 log.go:172] (0xc00003b600) Reply frame received for 5\nI0514 10:52:08.153648 122 log.go:172] (0xc00003b600) Data frame received for 5\nI0514 10:52:08.153678 122 log.go:172] (0xc0006695e0) (5) Data frame handling\nI0514 10:52:08.153698 122 log.go:172] (0xc0006695e0) (5) Data 
frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 10:52:08.197690 122 log.go:172] (0xc00003b600) Data frame received for 5\nI0514 10:52:08.197723 122 log.go:172] (0xc0006695e0) (5) Data frame handling\nI0514 10:52:08.197768 122 log.go:172] (0xc00003b600) Data frame received for 3\nI0514 10:52:08.197801 122 log.go:172] (0xc0008e6000) (3) Data frame handling\nI0514 10:52:08.197836 122 log.go:172] (0xc0008e6000) (3) Data frame sent\nI0514 10:52:08.197850 122 log.go:172] (0xc00003b600) Data frame received for 3\nI0514 10:52:08.197859 122 log.go:172] (0xc0008e6000) (3) Data frame handling\nI0514 10:52:08.199919 122 log.go:172] (0xc00003b600) Data frame received for 1\nI0514 10:52:08.199963 122 log.go:172] (0xc000669540) (1) Data frame handling\nI0514 10:52:08.200011 122 log.go:172] (0xc000669540) (1) Data frame sent\nI0514 10:52:08.200044 122 log.go:172] (0xc00003b600) (0xc000669540) Stream removed, broadcasting: 1\nI0514 10:52:08.200070 122 log.go:172] (0xc00003b600) Go away received\nI0514 10:52:08.200483 122 log.go:172] (0xc00003b600) (0xc000669540) Stream removed, broadcasting: 1\nI0514 10:52:08.200501 122 log.go:172] (0xc00003b600) (0xc0008e6000) Stream removed, broadcasting: 3\nI0514 10:52:08.200510 122 log.go:172] (0xc00003b600) (0xc0006695e0) Stream removed, broadcasting: 5\n" May 14 10:52:08.205: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 10:52:08.205: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 10:52:08.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8875 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 10:52:08.495: INFO: stderr: "I0514 10:52:08.345764 143 log.go:172] (0xc000ab80b0) (0xc00028c1e0) Create stream\nI0514 10:52:08.345815 143 log.go:172] (0xc000ab80b0) 
(0xc00028c1e0) Stream added, broadcasting: 1\nI0514 10:52:08.348331 143 log.go:172] (0xc000ab80b0) Reply frame received for 1\nI0514 10:52:08.348386 143 log.go:172] (0xc000ab80b0) (0xc0007432c0) Create stream\nI0514 10:52:08.348410 143 log.go:172] (0xc000ab80b0) (0xc0007432c0) Stream added, broadcasting: 3\nI0514 10:52:08.349418 143 log.go:172] (0xc000ab80b0) Reply frame received for 3\nI0514 10:52:08.349478 143 log.go:172] (0xc000ab80b0) (0xc000858000) Create stream\nI0514 10:52:08.349507 143 log.go:172] (0xc000ab80b0) (0xc000858000) Stream added, broadcasting: 5\nI0514 10:52:08.350489 143 log.go:172] (0xc000ab80b0) Reply frame received for 5\nI0514 10:52:08.421809 143 log.go:172] (0xc000ab80b0) Data frame received for 5\nI0514 10:52:08.421842 143 log.go:172] (0xc000858000) (5) Data frame handling\nI0514 10:52:08.421856 143 log.go:172] (0xc000858000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 10:52:08.485756 143 log.go:172] (0xc000ab80b0) Data frame received for 3\nI0514 10:52:08.485808 143 log.go:172] (0xc0007432c0) (3) Data frame handling\nI0514 10:52:08.485843 143 log.go:172] (0xc0007432c0) (3) Data frame sent\nI0514 10:52:08.486036 143 log.go:172] (0xc000ab80b0) Data frame received for 5\nI0514 10:52:08.486055 143 log.go:172] (0xc000858000) (5) Data frame handling\nI0514 10:52:08.486075 143 log.go:172] (0xc000ab80b0) Data frame received for 3\nI0514 10:52:08.486100 143 log.go:172] (0xc0007432c0) (3) Data frame handling\nI0514 10:52:08.487692 143 log.go:172] (0xc000ab80b0) Data frame received for 1\nI0514 10:52:08.487713 143 log.go:172] (0xc00028c1e0) (1) Data frame handling\nI0514 10:52:08.487720 143 log.go:172] (0xc00028c1e0) (1) Data frame sent\nI0514 10:52:08.487728 143 log.go:172] (0xc000ab80b0) (0xc00028c1e0) Stream removed, broadcasting: 1\nI0514 10:52:08.487766 143 log.go:172] (0xc000ab80b0) Go away received\nI0514 10:52:08.488006 143 log.go:172] (0xc000ab80b0) (0xc00028c1e0) Stream removed, broadcasting: 1\nI0514 
10:52:08.488024 143 log.go:172] (0xc000ab80b0) (0xc0007432c0) Stream removed, broadcasting: 3\nI0514 10:52:08.488033 143 log.go:172] (0xc000ab80b0) (0xc000858000) Stream removed, broadcasting: 5\n" May 14 10:52:08.495: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 10:52:08.495: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 10:52:08.495: INFO: Waiting for statefulset status.replicas updated to 0 May 14 10:52:08.498: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 14 10:52:18.617: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 10:52:18.617: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 14 10:52:18.617: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 14 10:52:18.684: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999469s May 14 10:52:19.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.958957967s May 14 10:52:20.695: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.953451539s May 14 10:52:21.701: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.947961762s May 14 10:52:22.705: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.94194056s May 14 10:52:23.709: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.937976333s May 14 10:52:24.712: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.934270765s May 14 10:52:25.717: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.930583813s May 14 10:52:26.725: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.92659248s May 14 10:52:27.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 918.419226ms STEP: Scaling down stateful set ss to 0 
replicas and waiting until no pods are running in namespace statefulset-8875 May 14 10:52:28.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 10:52:28.944: INFO: stderr: "I0514 10:52:28.853802 163 log.go:172] (0xc0000e8fd0) (0xc0005b3540) Create stream\nI0514 10:52:28.853847 163 log.go:172] (0xc0000e8fd0) (0xc0005b3540) Stream added, broadcasting: 1\nI0514 10:52:28.855406 163 log.go:172] (0xc0000e8fd0) Reply frame received for 1\nI0514 10:52:28.855426 163 log.go:172] (0xc0000e8fd0) (0xc0005b35e0) Create stream\nI0514 10:52:28.855448 163 log.go:172] (0xc0000e8fd0) (0xc0005b35e0) Stream added, broadcasting: 3\nI0514 10:52:28.856047 163 log.go:172] (0xc0000e8fd0) Reply frame received for 3\nI0514 10:52:28.856067 163 log.go:172] (0xc0000e8fd0) (0xc000998000) Create stream\nI0514 10:52:28.856083 163 log.go:172] (0xc0000e8fd0) (0xc000998000) Stream added, broadcasting: 5\nI0514 10:52:28.856706 163 log.go:172] (0xc0000e8fd0) Reply frame received for 5\nI0514 10:52:28.938436 163 log.go:172] (0xc0000e8fd0) Data frame received for 5\nI0514 10:52:28.938465 163 log.go:172] (0xc000998000) (5) Data frame handling\nI0514 10:52:28.938487 163 log.go:172] (0xc000998000) (5) Data frame sent\nI0514 10:52:28.938498 163 log.go:172] (0xc0000e8fd0) Data frame received for 5\nI0514 10:52:28.938508 163 log.go:172] (0xc000998000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 10:52:28.938523 163 log.go:172] (0xc0000e8fd0) Data frame received for 3\nI0514 10:52:28.938577 163 log.go:172] (0xc0005b35e0) (3) Data frame handling\nI0514 10:52:28.938615 163 log.go:172] (0xc0005b35e0) (3) Data frame sent\nI0514 10:52:28.938630 163 log.go:172] (0xc0000e8fd0) Data frame received for 3\nI0514 10:52:28.938645 163 log.go:172] (0xc0005b35e0) (3) Data frame handling\nI0514 10:52:28.939855 
163 log.go:172] (0xc0000e8fd0) Data frame received for 1\nI0514 10:52:28.939888 163 log.go:172] (0xc0005b3540) (1) Data frame handling\nI0514 10:52:28.939923 163 log.go:172] (0xc0005b3540) (1) Data frame sent\nI0514 10:52:28.939955 163 log.go:172] (0xc0000e8fd0) (0xc0005b3540) Stream removed, broadcasting: 1\nI0514 10:52:28.940001 163 log.go:172] (0xc0000e8fd0) Go away received\nI0514 10:52:28.940330 163 log.go:172] (0xc0000e8fd0) (0xc0005b3540) Stream removed, broadcasting: 1\nI0514 10:52:28.940344 163 log.go:172] (0xc0000e8fd0) (0xc0005b35e0) Stream removed, broadcasting: 3\nI0514 10:52:28.940352 163 log.go:172] (0xc0000e8fd0) (0xc000998000) Stream removed, broadcasting: 5\n" May 14 10:52:28.944: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 10:52:28.944: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 10:52:28.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8875 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 10:52:29.139: INFO: stderr: "I0514 10:52:29.071214 184 log.go:172] (0xc00003bef0) (0xc0009028c0) Create stream\nI0514 10:52:29.071257 184 log.go:172] (0xc00003bef0) (0xc0009028c0) Stream added, broadcasting: 1\nI0514 10:52:29.075703 184 log.go:172] (0xc00003bef0) Reply frame received for 1\nI0514 10:52:29.075738 184 log.go:172] (0xc00003bef0) (0xc00065b5e0) Create stream\nI0514 10:52:29.075754 184 log.go:172] (0xc00003bef0) (0xc00065b5e0) Stream added, broadcasting: 3\nI0514 10:52:29.076710 184 log.go:172] (0xc00003bef0) Reply frame received for 3\nI0514 10:52:29.076748 184 log.go:172] (0xc00003bef0) (0xc000526a00) Create stream\nI0514 10:52:29.076760 184 log.go:172] (0xc00003bef0) (0xc000526a00) Stream added, broadcasting: 5\nI0514 10:52:29.078025 184 log.go:172] (0xc00003bef0) Reply frame 
received for 5\nI0514 10:52:29.134617 184 log.go:172] (0xc00003bef0) Data frame received for 3\nI0514 10:52:29.134664 184 log.go:172] (0xc00065b5e0) (3) Data frame handling\nI0514 10:52:29.134682 184 log.go:172] (0xc00065b5e0) (3) Data frame sent\nI0514 10:52:29.134696 184 log.go:172] (0xc00003bef0) Data frame received for 3\nI0514 10:52:29.134705 184 log.go:172] (0xc00065b5e0) (3) Data frame handling\nI0514 10:52:29.134753 184 log.go:172] (0xc00003bef0) Data frame received for 5\nI0514 10:52:29.134776 184 log.go:172] (0xc000526a00) (5) Data frame handling\nI0514 10:52:29.134795 184 log.go:172] (0xc000526a00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 10:52:29.134825 184 log.go:172] (0xc00003bef0) Data frame received for 5\nI0514 10:52:29.134844 184 log.go:172] (0xc000526a00) (5) Data frame handling\nI0514 10:52:29.135911 184 log.go:172] (0xc00003bef0) Data frame received for 1\nI0514 10:52:29.135928 184 log.go:172] (0xc0009028c0) (1) Data frame handling\nI0514 10:52:29.135936 184 log.go:172] (0xc0009028c0) (1) Data frame sent\nI0514 10:52:29.135945 184 log.go:172] (0xc00003bef0) (0xc0009028c0) Stream removed, broadcasting: 1\nI0514 10:52:29.136027 184 log.go:172] (0xc00003bef0) Go away received\nI0514 10:52:29.136198 184 log.go:172] (0xc00003bef0) (0xc0009028c0) Stream removed, broadcasting: 1\nI0514 10:52:29.136210 184 log.go:172] (0xc00003bef0) (0xc00065b5e0) Stream removed, broadcasting: 3\nI0514 10:52:29.136217 184 log.go:172] (0xc00003bef0) (0xc000526a00) Stream removed, broadcasting: 5\n" May 14 10:52:29.139: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 10:52:29.139: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 10:52:29.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8875 ss-2 -- /bin/sh -x 
-c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 10:52:29.360: INFO: stderr: "I0514 10:52:29.297546 204 log.go:172] (0xc000942000) (0xc000918140) Create stream\nI0514 10:52:29.297631 204 log.go:172] (0xc000942000) (0xc000918140) Stream added, broadcasting: 1\nI0514 10:52:29.299537 204 log.go:172] (0xc000942000) Reply frame received for 1\nI0514 10:52:29.299558 204 log.go:172] (0xc000942000) (0xc00064c000) Create stream\nI0514 10:52:29.299563 204 log.go:172] (0xc000942000) (0xc00064c000) Stream added, broadcasting: 3\nI0514 10:52:29.300151 204 log.go:172] (0xc000942000) Reply frame received for 3\nI0514 10:52:29.300180 204 log.go:172] (0xc000942000) (0xc0009181e0) Create stream\nI0514 10:52:29.300193 204 log.go:172] (0xc000942000) (0xc0009181e0) Stream added, broadcasting: 5\nI0514 10:52:29.300970 204 log.go:172] (0xc000942000) Reply frame received for 5\nI0514 10:52:29.355764 204 log.go:172] (0xc000942000) Data frame received for 5\nI0514 10:52:29.355794 204 log.go:172] (0xc0009181e0) (5) Data frame handling\nI0514 10:52:29.355804 204 log.go:172] (0xc0009181e0) (5) Data frame sent\nI0514 10:52:29.355821 204 log.go:172] (0xc000942000) Data frame received for 5\nI0514 10:52:29.355831 204 log.go:172] (0xc0009181e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 10:52:29.355849 204 log.go:172] (0xc000942000) Data frame received for 3\nI0514 10:52:29.355856 204 log.go:172] (0xc00064c000) (3) Data frame handling\nI0514 10:52:29.355868 204 log.go:172] (0xc00064c000) (3) Data frame sent\nI0514 10:52:29.355876 204 log.go:172] (0xc000942000) Data frame received for 3\nI0514 10:52:29.355883 204 log.go:172] (0xc00064c000) (3) Data frame handling\nI0514 10:52:29.356925 204 log.go:172] (0xc000942000) Data frame received for 1\nI0514 10:52:29.356938 204 log.go:172] (0xc000918140) (1) Data frame handling\nI0514 10:52:29.356947 204 log.go:172] (0xc000918140) (1) Data frame sent\nI0514 10:52:29.356953 204 log.go:172] 
(0xc000942000) (0xc000918140) Stream removed, broadcasting: 1\nI0514 10:52:29.356967 204 log.go:172] (0xc000942000) Go away received\nI0514 10:52:29.357464 204 log.go:172] (0xc000942000) (0xc000918140) Stream removed, broadcasting: 1\nI0514 10:52:29.357482 204 log.go:172] (0xc000942000) (0xc00064c000) Stream removed, broadcasting: 3\nI0514 10:52:29.357493 204 log.go:172] (0xc000942000) (0xc0009181e0) Stream removed, broadcasting: 5\n" May 14 10:52:29.360: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 10:52:29.360: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 10:52:29.360: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 14 10:53:09.465: INFO: Deleting all statefulset in ns statefulset-8875 May 14 10:53:09.468: INFO: Scaling statefulset ss to 0 May 14 10:53:09.476: INFO: Waiting for statefulset status.replicas updated to 0 May 14 10:53:09.478: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:53:09.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8875" for this suite. 
• [SLOW TEST:92.692 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":4,"skipped":77,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:53:09.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-4969/configmap-test-c2650c51-41c8-4744-b15a-be0859ea9934 STEP: Creating a pod to test consume configMaps May 14 10:53:09.734: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec75e8a1-e0ee-4b52-8cda-32d11733c630" in namespace "configmap-4969" to be "Succeeded or Failed" May 14 10:53:09.804: INFO: Pod "pod-configmaps-ec75e8a1-e0ee-4b52-8cda-32d11733c630": Phase="Pending", Reason="", 
readiness=false. Elapsed: 69.619163ms May 14 10:53:11.808: INFO: Pod "pod-configmaps-ec75e8a1-e0ee-4b52-8cda-32d11733c630": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073105272s May 14 10:53:13.840: INFO: Pod "pod-configmaps-ec75e8a1-e0ee-4b52-8cda-32d11733c630": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105399131s May 14 10:53:15.845: INFO: Pod "pod-configmaps-ec75e8a1-e0ee-4b52-8cda-32d11733c630": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111005861s STEP: Saw pod success May 14 10:53:15.845: INFO: Pod "pod-configmaps-ec75e8a1-e0ee-4b52-8cda-32d11733c630" satisfied condition "Succeeded or Failed" May 14 10:53:15.849: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-ec75e8a1-e0ee-4b52-8cda-32d11733c630 container env-test: STEP: delete the pod May 14 10:53:15.875: INFO: Waiting for pod pod-configmaps-ec75e8a1-e0ee-4b52-8cda-32d11733c630 to disappear May 14 10:53:15.912: INFO: Pod pod-configmaps-ec75e8a1-e0ee-4b52-8cda-32d11733c630 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:53:15.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4969" for this suite. • [SLOW TEST:6.437 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":93,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:53:15.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:53:32.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9376" for this suite. • [SLOW TEST:16.214 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":275,"completed":6,"skipped":95,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:53:32.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod May 14 10:53:32.203: INFO: PodSpec: initContainers in spec.initContainers May 14 10:54:22.769: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-71cd0314-d29d-46d3-a029-1b2865e5ff01", GenerateName:"", Namespace:"init-container-2044", SelfLink:"/api/v1/namespaces/init-container-2044/pods/pod-init-71cd0314-d29d-46d3-a029-1b2865e5ff01", UID:"7d4e9750-6ba7-4577-816e-e2649c398450", ResourceVersion:"4264429", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725050412, loc:(*time.Location)(0x7b200c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"203236313"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00211d740), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00211d760)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00211d780), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00211d7e0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8tfx6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00121a0c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8tfx6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8tfx6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", 
Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8tfx6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0015ee3c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0029938f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0015ee450)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0015ee470)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0015ee478), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0015ee47c), 
PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725050412, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725050412, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725050412, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725050412, loc:(*time.Location)(0x7b200c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.18", PodIP:"10.244.1.197", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.197"}}, StartTime:(*v1.Time)(0xc00211d860), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029939d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0xc002993a40)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://147a261ddd9b15ae2697d919b14266b13d9c31a93dfb8b3d317d9b0606769dfe", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00211d8c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00211d8a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0015ee4ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:54:22.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2044" for this suite. 
• [SLOW TEST:50.609 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":7,"skipped":106,"failed":0} SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:54:22.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-2898 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2898 STEP: Deleting pre-stop pod May 14 10:54:36.088: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 10:54:36.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2898" for this suite.
• [SLOW TEST:13.453 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":8,"skipped":108,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 10:54:36.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-mz2s
STEP: Creating a pod to test atomic-volume-subpath
May 14 10:54:36.930: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mz2s" in namespace "subpath-7910" to be "Succeeded or Failed"
May 14 10:54:36.934: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Pending", Reason="", readiness=false. Elapsed: 3.444148ms
May 14 10:54:38.967: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036100087s
May 14 10:54:40.971: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Running", Reason="", readiness=true. Elapsed: 4.040161221s
May 14 10:54:42.978: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Running", Reason="", readiness=true. Elapsed: 6.047942088s
May 14 10:54:44.982: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Running", Reason="", readiness=true. Elapsed: 8.05146005s
May 14 10:54:46.985: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Running", Reason="", readiness=true. Elapsed: 10.054888747s
May 14 10:54:48.990: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Running", Reason="", readiness=true. Elapsed: 12.059083906s
May 14 10:54:50.994: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Running", Reason="", readiness=true. Elapsed: 14.063771959s
May 14 10:54:52.999: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Running", Reason="", readiness=true. Elapsed: 16.068134199s
May 14 10:54:55.002: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Running", Reason="", readiness=true. Elapsed: 18.071549461s
May 14 10:54:57.005: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Running", Reason="", readiness=true. Elapsed: 20.074917883s
May 14 10:54:59.014: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Running", Reason="", readiness=true. Elapsed: 22.083771619s
May 14 10:55:01.104: INFO: Pod "pod-subpath-test-configmap-mz2s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.173036599s
STEP: Saw pod success
May 14 10:55:01.104: INFO: Pod "pod-subpath-test-configmap-mz2s" satisfied condition "Succeeded or Failed"
May 14 10:55:01.106: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-mz2s container test-container-subpath-configmap-mz2s:
STEP: delete the pod
May 14 10:55:01.226: INFO: Waiting for pod pod-subpath-test-configmap-mz2s to disappear
May 14 10:55:01.260: INFO: Pod pod-subpath-test-configmap-mz2s no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mz2s
May 14 10:55:01.260: INFO: Deleting pod "pod-subpath-test-configmap-mz2s" in namespace "subpath-7910"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 10:55:01.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7910" for this suite.
• [SLOW TEST:25.040 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":9,"skipped":109,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
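The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Elapsed: ...` lines above come from a poll loop in the Go e2e framework: it re-reads the pod status every couple of seconds, logs the phase and elapsed time, and stops once a terminal phase is reached or the timeout expires. A minimal sketch of that pattern, with illustrative names (`wait_for_phase`, `get_phase` are not the framework's real identifiers, and the real code talks to the API server rather than a stub):

```python
import time

def wait_for_phase(get_phase, target_phases, timeout_s=300.0, interval_s=2.0,
                   now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns one of target_phases, logging the
    elapsed time on each attempt, or raise TimeoutError after timeout_s.
    Mirrors the 'Waiting up to 5m0s for pod ...' loop in the log above."""
    start = now()
    while True:
        phase = get_phase()
        elapsed = now() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in target_phases:
            return phase
        if elapsed + interval_s > timeout_s:
            raise TimeoutError(f'pod did not reach {target_phases} within {timeout_s}s')
        sleep(interval_s)

# Simulated status source standing in for the API server:
# Pending twice, Running once, then Succeeded.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_phase(lambda: next(phases), {"Succeeded", "Failed"},
                        sleep=lambda s: None)  # no real sleeping in the sketch
```

Note that `"Succeeded or Failed"` is a set of *terminal* phases: the loop keeps polling through `Pending` and `Running` and only returns (or fails the test) on a phase the pod cannot leave.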
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 10:55:01.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6202
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-6202
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6202
May 14 10:55:01.524: INFO: Found 0 stateful pods, waiting for 1
May 14 10:55:11.528: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 14 10:55:11.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6202 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 14 10:55:11.789: INFO: stderr: "I0514 10:55:11.656986 229 log.go:172] (0xc000935290) (0xc0009a66e0) Create stream\nI0514 10:55:11.657030 229 log.go:172] (0xc000935290) (0xc0009a66e0) Stream added, broadcasting: 1\nI0514 10:55:11.662323 229 log.go:172] (0xc000935290) Reply frame received for 1\nI0514 10:55:11.662354 229 log.go:172] (0xc000935290) (0xc000677720) Create stream\nI0514 10:55:11.662365 229 log.go:172] (0xc000935290) (0xc000677720) Stream added, 
broadcasting: 3\nI0514 10:55:11.663294 229 log.go:172] (0xc000935290) Reply frame received for 3\nI0514 10:55:11.663340 229 log.go:172] (0xc000935290) (0xc000456b40) Create stream\nI0514 10:55:11.663361 229 log.go:172] (0xc000935290) (0xc000456b40) Stream added, broadcasting: 5\nI0514 10:55:11.664308 229 log.go:172] (0xc000935290) Reply frame received for 5\nI0514 10:55:11.754162 229 log.go:172] (0xc000935290) Data frame received for 5\nI0514 10:55:11.754189 229 log.go:172] (0xc000456b40) (5) Data frame handling\nI0514 10:55:11.754206 229 log.go:172] (0xc000456b40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 10:55:11.783257 229 log.go:172] (0xc000935290) Data frame received for 3\nI0514 10:55:11.783296 229 log.go:172] (0xc000677720) (3) Data frame handling\nI0514 10:55:11.783319 229 log.go:172] (0xc000677720) (3) Data frame sent\nI0514 10:55:11.783334 229 log.go:172] (0xc000935290) Data frame received for 3\nI0514 10:55:11.783347 229 log.go:172] (0xc000677720) (3) Data frame handling\nI0514 10:55:11.783359 229 log.go:172] (0xc000935290) Data frame received for 5\nI0514 10:55:11.783367 229 log.go:172] (0xc000456b40) (5) Data frame handling\nI0514 10:55:11.784665 229 log.go:172] (0xc000935290) Data frame received for 1\nI0514 10:55:11.784677 229 log.go:172] (0xc0009a66e0) (1) Data frame handling\nI0514 10:55:11.784687 229 log.go:172] (0xc0009a66e0) (1) Data frame sent\nI0514 10:55:11.784694 229 log.go:172] (0xc000935290) (0xc0009a66e0) Stream removed, broadcasting: 1\nI0514 10:55:11.784829 229 log.go:172] (0xc000935290) Go away received\nI0514 10:55:11.784865 229 log.go:172] (0xc000935290) (0xc0009a66e0) Stream removed, broadcasting: 1\nI0514 10:55:11.784872 229 log.go:172] (0xc000935290) (0xc000677720) Stream removed, broadcasting: 3\nI0514 10:55:11.784877 229 log.go:172] (0xc000935290) (0xc000456b40) Stream removed, broadcasting: 5\n" May 14 10:55:11.789: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html'\n"
May 14 10:55:11.789: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 14 10:55:11.792: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 14 10:55:21.796: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 14 10:55:21.796: INFO: Waiting for statefulset status.replicas updated to 0
May 14 10:55:21.818: INFO: POD NODE PHASE GRACE CONDITIONS
May 14 10:55:21.818: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC }]
May 14 10:55:21.818: INFO:
May 14 10:55:21.818: INFO: StatefulSet ss has not reached scale 3, at 1
May 14 10:55:22.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988743912s
May 14 10:55:23.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984028694s
May 14 10:55:24.907: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.971043998s
May 14 10:55:25.998: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.899384568s
May 14 10:55:27.003: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.808603366s
May 14 10:55:28.008: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.803573772s
May 14 10:55:29.052: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.798607992s
May 14 10:55:30.058: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.754880602s
May 14 10:55:31.063: INFO: Verifying statefulset ss doesn't scale past 
3 for another 748.883402ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6202 May 14 10:55:32.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6202 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 10:55:32.289: INFO: stderr: "I0514 10:55:32.197775 248 log.go:172] (0xc00003a2c0) (0xc0005f1720) Create stream\nI0514 10:55:32.197995 248 log.go:172] (0xc00003a2c0) (0xc0005f1720) Stream added, broadcasting: 1\nI0514 10:55:32.199728 248 log.go:172] (0xc00003a2c0) Reply frame received for 1\nI0514 10:55:32.199790 248 log.go:172] (0xc00003a2c0) (0xc00052ab40) Create stream\nI0514 10:55:32.199806 248 log.go:172] (0xc00003a2c0) (0xc00052ab40) Stream added, broadcasting: 3\nI0514 10:55:32.200640 248 log.go:172] (0xc00003a2c0) Reply frame received for 3\nI0514 10:55:32.200669 248 log.go:172] (0xc00003a2c0) (0xc000a7a000) Create stream\nI0514 10:55:32.200679 248 log.go:172] (0xc00003a2c0) (0xc000a7a000) Stream added, broadcasting: 5\nI0514 10:55:32.201807 248 log.go:172] (0xc00003a2c0) Reply frame received for 5\nI0514 10:55:32.282394 248 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0514 10:55:32.282438 248 log.go:172] (0xc00052ab40) (3) Data frame handling\nI0514 10:55:32.282456 248 log.go:172] (0xc00052ab40) (3) Data frame sent\nI0514 10:55:32.282466 248 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0514 10:55:32.282473 248 log.go:172] (0xc00052ab40) (3) Data frame handling\nI0514 10:55:32.282515 248 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0514 10:55:32.282545 248 log.go:172] (0xc000a7a000) (5) Data frame handling\nI0514 10:55:32.282561 248 log.go:172] (0xc000a7a000) (5) Data frame sent\nI0514 10:55:32.282579 248 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0514 10:55:32.282585 248 log.go:172] (0xc000a7a000) (5) Data frame handling\n+ 
mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 10:55:32.283869 248 log.go:172] (0xc00003a2c0) Data frame received for 1\nI0514 10:55:32.283905 248 log.go:172] (0xc0005f1720) (1) Data frame handling\nI0514 10:55:32.283921 248 log.go:172] (0xc0005f1720) (1) Data frame sent\nI0514 10:55:32.283934 248 log.go:172] (0xc00003a2c0) (0xc0005f1720) Stream removed, broadcasting: 1\nI0514 10:55:32.283946 248 log.go:172] (0xc00003a2c0) Go away received\nI0514 10:55:32.284320 248 log.go:172] (0xc00003a2c0) (0xc0005f1720) Stream removed, broadcasting: 1\nI0514 10:55:32.284344 248 log.go:172] (0xc00003a2c0) (0xc00052ab40) Stream removed, broadcasting: 3\nI0514 10:55:32.284361 248 log.go:172] (0xc00003a2c0) (0xc000a7a000) Stream removed, broadcasting: 5\n" May 14 10:55:32.289: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 10:55:32.289: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 10:55:32.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6202 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 10:55:32.500: INFO: stderr: "I0514 10:55:32.425676 269 log.go:172] (0xc00003a580) (0xc000a42000) Create stream\nI0514 10:55:32.425758 269 log.go:172] (0xc00003a580) (0xc000a42000) Stream added, broadcasting: 1\nI0514 10:55:32.429371 269 log.go:172] (0xc00003a580) Reply frame received for 1\nI0514 10:55:32.429418 269 log.go:172] (0xc00003a580) (0xc000994000) Create stream\nI0514 10:55:32.429430 269 log.go:172] (0xc00003a580) (0xc000994000) Stream added, broadcasting: 3\nI0514 10:55:32.430357 269 log.go:172] (0xc00003a580) Reply frame received for 3\nI0514 10:55:32.430393 269 log.go:172] (0xc00003a580) (0xc000a420a0) Create stream\nI0514 10:55:32.430404 269 log.go:172] (0xc00003a580) (0xc000a420a0) Stream added, 
broadcasting: 5\nI0514 10:55:32.431378 269 log.go:172] (0xc00003a580) Reply frame received for 5\nI0514 10:55:32.494586 269 log.go:172] (0xc00003a580) Data frame received for 5\nI0514 10:55:32.494656 269 log.go:172] (0xc000a420a0) (5) Data frame handling\nI0514 10:55:32.494683 269 log.go:172] (0xc000a420a0) (5) Data frame sent\nI0514 10:55:32.494700 269 log.go:172] (0xc00003a580) Data frame received for 5\nI0514 10:55:32.494715 269 log.go:172] (0xc000a420a0) (5) Data frame handling\nI0514 10:55:32.494768 269 log.go:172] (0xc00003a580) Data frame received for 3\nI0514 10:55:32.494864 269 log.go:172] (0xc000994000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0514 10:55:32.494893 269 log.go:172] (0xc000994000) (3) Data frame sent\nI0514 10:55:32.494912 269 log.go:172] (0xc00003a580) Data frame received for 3\nI0514 10:55:32.494929 269 log.go:172] (0xc000994000) (3) Data frame handling\nI0514 10:55:32.496080 269 log.go:172] (0xc00003a580) Data frame received for 1\nI0514 10:55:32.496104 269 log.go:172] (0xc000a42000) (1) Data frame handling\nI0514 10:55:32.496119 269 log.go:172] (0xc000a42000) (1) Data frame sent\nI0514 10:55:32.496134 269 log.go:172] (0xc00003a580) (0xc000a42000) Stream removed, broadcasting: 1\nI0514 10:55:32.496172 269 log.go:172] (0xc00003a580) Go away received\nI0514 10:55:32.496464 269 log.go:172] (0xc00003a580) (0xc000a42000) Stream removed, broadcasting: 1\nI0514 10:55:32.496483 269 log.go:172] (0xc00003a580) (0xc000994000) Stream removed, broadcasting: 3\nI0514 10:55:32.496490 269 log.go:172] (0xc00003a580) (0xc000a420a0) Stream removed, broadcasting: 5\n" May 14 10:55:32.500: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 10:55:32.500: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 10:55:32.500: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6202 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 10:55:32.698: INFO: stderr: "I0514 10:55:32.624604 289 log.go:172] (0xc0008dcbb0) (0xc000a34640) Create stream\nI0514 10:55:32.624671 289 log.go:172] (0xc0008dcbb0) (0xc000a34640) Stream added, broadcasting: 1\nI0514 10:55:32.628917 289 log.go:172] (0xc0008dcbb0) Reply frame received for 1\nI0514 10:55:32.628956 289 log.go:172] (0xc0008dcbb0) (0xc000629680) Create stream\nI0514 10:55:32.628967 289 log.go:172] (0xc0008dcbb0) (0xc000629680) Stream added, broadcasting: 3\nI0514 10:55:32.630291 289 log.go:172] (0xc0008dcbb0) Reply frame received for 3\nI0514 10:55:32.630321 289 log.go:172] (0xc0008dcbb0) (0xc0004f4aa0) Create stream\nI0514 10:55:32.630330 289 log.go:172] (0xc0008dcbb0) (0xc0004f4aa0) Stream added, broadcasting: 5\nI0514 10:55:32.631424 289 log.go:172] (0xc0008dcbb0) Reply frame received for 5\nI0514 10:55:32.691631 289 log.go:172] (0xc0008dcbb0) Data frame received for 3\nI0514 10:55:32.691664 289 log.go:172] (0xc000629680) (3) Data frame handling\nI0514 10:55:32.691677 289 log.go:172] (0xc000629680) (3) Data frame sent\nI0514 10:55:32.691712 289 log.go:172] (0xc0008dcbb0) Data frame received for 5\nI0514 10:55:32.691719 289 log.go:172] (0xc0004f4aa0) (5) Data frame handling\nI0514 10:55:32.691724 289 log.go:172] (0xc0004f4aa0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0514 10:55:32.691821 289 log.go:172] (0xc0008dcbb0) Data frame received for 3\nI0514 10:55:32.691839 289 log.go:172] (0xc000629680) (3) Data frame handling\nI0514 10:55:32.691873 289 log.go:172] (0xc0008dcbb0) Data frame received for 5\nI0514 10:55:32.691910 289 log.go:172] (0xc0004f4aa0) (5) Data frame handling\nI0514 10:55:32.693324 289 log.go:172] (0xc0008dcbb0) Data 
frame received for 1\nI0514 10:55:32.693374 289 log.go:172] (0xc000a34640) (1) Data frame handling\nI0514 10:55:32.693444 289 log.go:172] (0xc000a34640) (1) Data frame sent\nI0514 10:55:32.693541 289 log.go:172] (0xc0008dcbb0) (0xc000a34640) Stream removed, broadcasting: 1\nI0514 10:55:32.693572 289 log.go:172] (0xc0008dcbb0) Go away received\nI0514 10:55:32.694065 289 log.go:172] (0xc0008dcbb0) (0xc000a34640) Stream removed, broadcasting: 1\nI0514 10:55:32.694098 289 log.go:172] (0xc0008dcbb0) (0xc000629680) Stream removed, broadcasting: 3\nI0514 10:55:32.694114 289 log.go:172] (0xc0008dcbb0) (0xc0004f4aa0) Stream removed, broadcasting: 5\n" May 14 10:55:32.698: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 10:55:32.698: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 10:55:32.710: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 14 10:55:32.710: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 14 10:55:32.710: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 14 10:55:32.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6202 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 10:55:32.932: INFO: stderr: "I0514 10:55:32.843118 312 log.go:172] (0xc00003a0b0) (0xc00090aa00) Create stream\nI0514 10:55:32.843177 312 log.go:172] (0xc00003a0b0) (0xc00090aa00) Stream added, broadcasting: 1\nI0514 10:55:32.845590 312 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0514 10:55:32.845632 312 log.go:172] (0xc00003a0b0) (0xc00090ab40) Create stream\nI0514 10:55:32.845646 312 log.go:172] (0xc00003a0b0) (0xc00090ab40) Stream 
added, broadcasting: 3\nI0514 10:55:32.846556 312 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0514 10:55:32.846593 312 log.go:172] (0xc00003a0b0) (0xc000150140) Create stream\nI0514 10:55:32.846613 312 log.go:172] (0xc00003a0b0) (0xc000150140) Stream added, broadcasting: 5\nI0514 10:55:32.847484 312 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0514 10:55:32.925871 312 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0514 10:55:32.925922 312 log.go:172] (0xc000150140) (5) Data frame handling\nI0514 10:55:32.925946 312 log.go:172] (0xc000150140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 10:55:32.925966 312 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0514 10:55:32.925984 312 log.go:172] (0xc00090ab40) (3) Data frame handling\nI0514 10:55:32.926003 312 log.go:172] (0xc00090ab40) (3) Data frame sent\nI0514 10:55:32.926018 312 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0514 10:55:32.926028 312 log.go:172] (0xc00090ab40) (3) Data frame handling\nI0514 10:55:32.926073 312 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0514 10:55:32.926145 312 log.go:172] (0xc000150140) (5) Data frame handling\nI0514 10:55:32.927498 312 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0514 10:55:32.927550 312 log.go:172] (0xc00090aa00) (1) Data frame handling\nI0514 10:55:32.927571 312 log.go:172] (0xc00090aa00) (1) Data frame sent\nI0514 10:55:32.927753 312 log.go:172] (0xc00003a0b0) (0xc00090aa00) Stream removed, broadcasting: 1\nI0514 10:55:32.928038 312 log.go:172] (0xc00003a0b0) Go away received\nI0514 10:55:32.928118 312 log.go:172] (0xc00003a0b0) (0xc00090aa00) Stream removed, broadcasting: 1\nI0514 10:55:32.928139 312 log.go:172] (0xc00003a0b0) (0xc00090ab40) Stream removed, broadcasting: 3\nI0514 10:55:32.928158 312 log.go:172] (0xc00003a0b0) (0xc000150140) Stream removed, broadcasting: 5\n" May 14 10:55:32.932: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html'\n" May 14 10:55:32.932: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 10:55:32.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6202 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 10:55:33.214: INFO: stderr: "I0514 10:55:33.084212 331 log.go:172] (0xc000c45340) (0xc000be08c0) Create stream\nI0514 10:55:33.084293 331 log.go:172] (0xc000c45340) (0xc000be08c0) Stream added, broadcasting: 1\nI0514 10:55:33.087348 331 log.go:172] (0xc000c45340) Reply frame received for 1\nI0514 10:55:33.087395 331 log.go:172] (0xc000c45340) (0xc0005cab40) Create stream\nI0514 10:55:33.087416 331 log.go:172] (0xc000c45340) (0xc0005cab40) Stream added, broadcasting: 3\nI0514 10:55:33.088360 331 log.go:172] (0xc000c45340) Reply frame received for 3\nI0514 10:55:33.088413 331 log.go:172] (0xc000c45340) (0xc000be0960) Create stream\nI0514 10:55:33.088440 331 log.go:172] (0xc000c45340) (0xc000be0960) Stream added, broadcasting: 5\nI0514 10:55:33.090703 331 log.go:172] (0xc000c45340) Reply frame received for 5\nI0514 10:55:33.162278 331 log.go:172] (0xc000c45340) Data frame received for 5\nI0514 10:55:33.162311 331 log.go:172] (0xc000be0960) (5) Data frame handling\nI0514 10:55:33.162329 331 log.go:172] (0xc000be0960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 10:55:33.206232 331 log.go:172] (0xc000c45340) Data frame received for 5\nI0514 10:55:33.206299 331 log.go:172] (0xc000be0960) (5) Data frame handling\nI0514 10:55:33.206332 331 log.go:172] (0xc000c45340) Data frame received for 3\nI0514 10:55:33.206356 331 log.go:172] (0xc0005cab40) (3) Data frame handling\nI0514 10:55:33.206386 331 log.go:172] (0xc0005cab40) (3) Data frame sent\nI0514 10:55:33.206411 331 log.go:172] (0xc000c45340) Data frame received 
for 3\nI0514 10:55:33.206425 331 log.go:172] (0xc0005cab40) (3) Data frame handling\nI0514 10:55:33.208029 331 log.go:172] (0xc000c45340) Data frame received for 1\nI0514 10:55:33.208068 331 log.go:172] (0xc000be08c0) (1) Data frame handling\nI0514 10:55:33.208090 331 log.go:172] (0xc000be08c0) (1) Data frame sent\nI0514 10:55:33.208114 331 log.go:172] (0xc000c45340) (0xc000be08c0) Stream removed, broadcasting: 1\nI0514 10:55:33.208143 331 log.go:172] (0xc000c45340) Go away received\nI0514 10:55:33.208726 331 log.go:172] (0xc000c45340) (0xc000be08c0) Stream removed, broadcasting: 1\nI0514 10:55:33.208759 331 log.go:172] (0xc000c45340) (0xc0005cab40) Stream removed, broadcasting: 3\nI0514 10:55:33.208778 331 log.go:172] (0xc000c45340) (0xc000be0960) Stream removed, broadcasting: 5\n" May 14 10:55:33.214: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 10:55:33.214: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 10:55:33.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6202 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 10:55:33.447: INFO: stderr: "I0514 10:55:33.348108 351 log.go:172] (0xc0009a88f0) (0xc0008135e0) Create stream\nI0514 10:55:33.348196 351 log.go:172] (0xc0009a88f0) (0xc0008135e0) Stream added, broadcasting: 1\nI0514 10:55:33.351642 351 log.go:172] (0xc0009a88f0) Reply frame received for 1\nI0514 10:55:33.351698 351 log.go:172] (0xc0009a88f0) (0xc00066d5e0) Create stream\nI0514 10:55:33.351715 351 log.go:172] (0xc0009a88f0) (0xc00066d5e0) Stream added, broadcasting: 3\nI0514 10:55:33.352684 351 log.go:172] (0xc0009a88f0) Reply frame received for 3\nI0514 10:55:33.352729 351 log.go:172] (0xc0009a88f0) (0xc00053ea00) Create stream\nI0514 10:55:33.352745 351 log.go:172] (0xc0009a88f0) 
(0xc00053ea00) Stream added, broadcasting: 5\nI0514 10:55:33.353694 351 log.go:172] (0xc0009a88f0) Reply frame received for 5\nI0514 10:55:33.413674 351 log.go:172] (0xc0009a88f0) Data frame received for 5\nI0514 10:55:33.413702 351 log.go:172] (0xc00053ea00) (5) Data frame handling\nI0514 10:55:33.413715 351 log.go:172] (0xc00053ea00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 10:55:33.439303 351 log.go:172] (0xc0009a88f0) Data frame received for 3\nI0514 10:55:33.439441 351 log.go:172] (0xc00066d5e0) (3) Data frame handling\nI0514 10:55:33.439491 351 log.go:172] (0xc00066d5e0) (3) Data frame sent\nI0514 10:55:33.439609 351 log.go:172] (0xc0009a88f0) Data frame received for 3\nI0514 10:55:33.439641 351 log.go:172] (0xc00066d5e0) (3) Data frame handling\nI0514 10:55:33.439690 351 log.go:172] (0xc0009a88f0) Data frame received for 5\nI0514 10:55:33.439725 351 log.go:172] (0xc00053ea00) (5) Data frame handling\nI0514 10:55:33.441660 351 log.go:172] (0xc0009a88f0) Data frame received for 1\nI0514 10:55:33.441674 351 log.go:172] (0xc0008135e0) (1) Data frame handling\nI0514 10:55:33.441685 351 log.go:172] (0xc0008135e0) (1) Data frame sent\nI0514 10:55:33.441827 351 log.go:172] (0xc0009a88f0) (0xc0008135e0) Stream removed, broadcasting: 1\nI0514 10:55:33.441843 351 log.go:172] (0xc0009a88f0) Go away received\nI0514 10:55:33.442381 351 log.go:172] (0xc0009a88f0) (0xc0008135e0) Stream removed, broadcasting: 1\nI0514 10:55:33.442415 351 log.go:172] (0xc0009a88f0) (0xc00066d5e0) Stream removed, broadcasting: 3\nI0514 10:55:33.442427 351 log.go:172] (0xc0009a88f0) (0xc00053ea00) Stream removed, broadcasting: 5\n" May 14 10:55:33.447: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 10:55:33.447: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 10:55:33.447: INFO: Waiting for statefulset status.replicas 
updated to 0
May 14 10:55:33.451: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
May 14 10:55:43.459: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 14 10:55:43.459: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 14 10:55:43.459: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 14 10:55:43.519: INFO: POD NODE PHASE GRACE CONDITIONS
May 14 10:55:43.519: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC }]
May 14 10:55:43.519: INFO: ss-1 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }]
May 14 10:55:43.519: INFO: ss-2 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 
UTC }] May 14 10:55:43.519: INFO: May 14 10:55:43.519: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 10:55:44.627: INFO: POD NODE PHASE GRACE CONDITIONS May 14 10:55:44.627: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC }] May 14 10:55:44.627: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:44.627: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:44.627: INFO: May 14 10:55:44.627: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 10:55:45.662: INFO: POD NODE PHASE GRACE CONDITIONS May 14 10:55:45.662: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC }] May 14 10:55:45.662: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:45.662: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:45.662: INFO: May 14 10:55:45.662: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 10:55:46.885: INFO: POD NODE PHASE GRACE CONDITIONS May 14 10:55:46.885: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 
+0000 UTC }] May 14 10:55:46.885: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:46.885: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:46.885: INFO: May 14 10:55:46.886: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 10:55:47.932: INFO: POD NODE PHASE GRACE CONDITIONS May 14 10:55:47.932: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC }] May 14 10:55:47.932: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:47.932: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:47.932: INFO: May 14 10:55:47.932: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 10:55:48.937: INFO: POD NODE PHASE GRACE CONDITIONS May 14 10:55:48.937: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC }] May 14 10:55:48.937: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:48.937: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:48.937: INFO: May 14 10:55:48.938: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 10:55:49.942: INFO: POD NODE PHASE GRACE CONDITIONS May 14 10:55:49.942: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC }] May 14 10:55:49.942: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:49.942: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:49.942: INFO: May 14 10:55:49.942: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 10:55:50.946: INFO: POD NODE PHASE GRACE CONDITIONS May 14 10:55:50.947: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC }] May 14 10:55:50.947: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:50.947: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:50.947: INFO: May 14 10:55:50.947: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 10:55:51.998: INFO: POD NODE PHASE GRACE CONDITIONS May 14 10:55:51.998: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC }] May 14 10:55:51.998: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:51.998: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:51.998: INFO: May 14 10:55:51.998: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 10:55:53.003: INFO: POD NODE PHASE GRACE CONDITIONS May 14 10:55:53.003: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 
UTC 2020-05-14 10:55:01 +0000 UTC }] May 14 10:55:53.003: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:53.003: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 10:55:21 +0000 UTC }] May 14 10:55:53.003: INFO: May 14 10:55:53.003: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6202 May 14 10:55:54.008: INFO: Scaling statefulset ss to 0 May 14 10:55:54.019: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 14 10:55:54.022: INFO: Deleting all statefulset in ns statefulset-6202 May 14 10:55:54.025: INFO: Scaling statefulset ss to 0 May 14 10:55:54.034: INFO: Waiting for statefulset status.replicas updated to 0 May 14 10:55:54.036: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:55:54.068: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6202" for this suite. • [SLOW TEST:52.814 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":10,"skipped":116,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:55:54.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-90cfd008-f766-4899-8f3f-85740f524dd5 STEP: Creating a pod to test consume secrets May 14 10:55:54.192: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-27ae6fff-bf7d-4061-8111-c958f608aab3" in namespace 
"projected-2521" to be "Succeeded or Failed" May 14 10:55:54.208: INFO: Pod "pod-projected-secrets-27ae6fff-bf7d-4061-8111-c958f608aab3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.22115ms May 14 10:55:56.279: INFO: Pod "pod-projected-secrets-27ae6fff-bf7d-4061-8111-c958f608aab3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086988897s May 14 10:55:58.283: INFO: Pod "pod-projected-secrets-27ae6fff-bf7d-4061-8111-c958f608aab3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091176505s STEP: Saw pod success May 14 10:55:58.283: INFO: Pod "pod-projected-secrets-27ae6fff-bf7d-4061-8111-c958f608aab3" satisfied condition "Succeeded or Failed" May 14 10:55:58.286: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-27ae6fff-bf7d-4061-8111-c958f608aab3 container projected-secret-volume-test: STEP: delete the pod May 14 10:55:58.370: INFO: Waiting for pod pod-projected-secrets-27ae6fff-bf7d-4061-8111-c958f608aab3 to disappear May 14 10:55:58.452: INFO: Pod pod-projected-secrets-27ae6fff-bf7d-4061-8111-c958f608aab3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:55:58.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2521" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":120,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:55:58.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:56:04.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9875" for this suite. 
• [SLOW TEST:6.610 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":12,"skipped":142,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:56:05.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition May 14 10:56:05.252: INFO: Waiting up to 5m0s for pod "var-expansion-effbdee8-c49c-4eed-9d63-bb61de290fb5" in namespace "var-expansion-5003" to be "Succeeded or Failed" May 14 10:56:05.304: INFO: Pod "var-expansion-effbdee8-c49c-4eed-9d63-bb61de290fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 51.613066ms May 14 10:56:07.328: INFO: Pod "var-expansion-effbdee8-c49c-4eed-9d63-bb61de290fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076125059s May 14 10:56:09.381: INFO: Pod "var-expansion-effbdee8-c49c-4eed-9d63-bb61de290fb5": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.129038124s May 14 10:56:11.386: INFO: Pod "var-expansion-effbdee8-c49c-4eed-9d63-bb61de290fb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133513162s STEP: Saw pod success May 14 10:56:11.386: INFO: Pod "var-expansion-effbdee8-c49c-4eed-9d63-bb61de290fb5" satisfied condition "Succeeded or Failed" May 14 10:56:11.390: INFO: Trying to get logs from node kali-worker2 pod var-expansion-effbdee8-c49c-4eed-9d63-bb61de290fb5 container dapi-container: STEP: delete the pod May 14 10:56:11.466: INFO: Waiting for pod var-expansion-effbdee8-c49c-4eed-9d63-bb61de290fb5 to disappear May 14 10:56:11.478: INFO: Pod var-expansion-effbdee8-c49c-4eed-9d63-bb61de290fb5 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:56:11.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5003" for this suite. • [SLOW TEST:6.415 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:56:11.487: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:56:31.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8012" for this suite. • [SLOW TEST:20.065 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":14,"skipped":192,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:56:31.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller May 14 10:56:31.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7850' May 14 10:56:31.988: INFO: stderr: "" May 14 10:56:31.988: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 10:56:31.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7850' May 14 10:56:32.198: INFO: stderr: "" May 14 10:56:32.198: INFO: stdout: "update-demo-nautilus-nbrsv update-demo-nautilus-w2ljb " May 14 10:56:32.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbrsv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7850' May 14 10:56:32.318: INFO: stderr: "" May 14 10:56:32.318: INFO: stdout: "" May 14 10:56:32.318: INFO: update-demo-nautilus-nbrsv is created but not running May 14 10:56:37.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7850' May 14 10:56:37.428: INFO: stderr: "" May 14 10:56:37.428: INFO: stdout: "update-demo-nautilus-nbrsv update-demo-nautilus-w2ljb " May 14 10:56:37.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbrsv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7850' May 14 10:56:37.516: INFO: stderr: "" May 14 10:56:37.516: INFO: stdout: "true" May 14 10:56:37.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbrsv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7850' May 14 10:56:37.616: INFO: stderr: "" May 14 10:56:37.616: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 10:56:37.616: INFO: validating pod update-demo-nautilus-nbrsv May 14 10:56:37.619: INFO: got data: { "image": "nautilus.jpg" } May 14 10:56:37.620: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 14 10:56:37.620: INFO: update-demo-nautilus-nbrsv is verified up and running May 14 10:56:37.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w2ljb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7850' May 14 10:56:37.729: INFO: stderr: "" May 14 10:56:37.729: INFO: stdout: "true" May 14 10:56:37.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w2ljb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7850' May 14 10:56:37.831: INFO: stderr: "" May 14 10:56:37.831: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 10:56:37.831: INFO: validating pod update-demo-nautilus-w2ljb May 14 10:56:37.849: INFO: got data: { "image": "nautilus.jpg" } May 14 10:56:37.849: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 10:56:37.849: INFO: update-demo-nautilus-w2ljb is verified up and running STEP: using delete to clean up resources May 14 10:56:37.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7850' May 14 10:56:37.950: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 14 10:56:37.950: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 14 10:56:37.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7850' May 14 10:56:38.055: INFO: stderr: "No resources found in kubectl-7850 namespace.\n" May 14 10:56:38.055: INFO: stdout: "" May 14 10:56:38.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7850 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 14 10:56:38.166: INFO: stderr: "" May 14 10:56:38.166: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:56:38.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7850" for this suite. 
• [SLOW TEST:6.622 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":15,"skipped":196,"failed":0} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:56:38.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:56:45.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6206" for this suite. STEP: Destroying namespace "nsdeletetest-7407" for this suite. May 14 10:56:45.569: INFO: Namespace nsdeletetest-7407 was already deleted STEP: Destroying namespace "nsdeletetest-8115" for this suite. • [SLOW TEST:7.400 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":16,"skipped":198,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:56:45.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-c671a749-249a-4aeb-92fe-abe118ac533a STEP: Creating a pod to test consume secrets May 14 10:56:45.757: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7e3c2d05-0d9f-40ad-86e7-81a2edf14315" in namespace "projected-4959" to be "Succeeded or Failed" May 14 10:56:45.775: INFO: Pod "pod-projected-secrets-7e3c2d05-0d9f-40ad-86e7-81a2edf14315": Phase="Pending", Reason="", readiness=false. Elapsed: 17.494398ms May 14 10:56:47.938: INFO: Pod "pod-projected-secrets-7e3c2d05-0d9f-40ad-86e7-81a2edf14315": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180651518s May 14 10:56:49.958: INFO: Pod "pod-projected-secrets-7e3c2d05-0d9f-40ad-86e7-81a2edf14315": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.200491789s STEP: Saw pod success May 14 10:56:49.958: INFO: Pod "pod-projected-secrets-7e3c2d05-0d9f-40ad-86e7-81a2edf14315" satisfied condition "Succeeded or Failed" May 14 10:56:49.961: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-7e3c2d05-0d9f-40ad-86e7-81a2edf14315 container projected-secret-volume-test: STEP: delete the pod May 14 10:56:50.000: INFO: Waiting for pod pod-projected-secrets-7e3c2d05-0d9f-40ad-86e7-81a2edf14315 to disappear May 14 10:56:50.088: INFO: Pod pod-projected-secrets-7e3c2d05-0d9f-40ad-86e7-81a2edf14315 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:56:50.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4959" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":205,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:56:50.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 10:56:50.548: INFO: Create a RollingUpdate DaemonSet May 14 10:56:50.552: INFO: Check that daemon pods launch on every node of the cluster May 14 10:56:50.582: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:56:50.599: INFO: Number of nodes with available pods: 0 May 14 10:56:50.599: INFO: Node kali-worker is running more than one daemon pod May 14 10:56:51.605: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:56:51.608: INFO: Number of nodes with available pods: 0 May 14 10:56:51.608: INFO: Node kali-worker is running more than one daemon pod May 14 10:56:52.606: INFO: DaemonSet pods can't tolerate node 
kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:56:52.610: INFO: Number of nodes with available pods: 0 May 14 10:56:52.610: INFO: Node kali-worker is running more than one daemon pod May 14 10:56:53.604: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:56:53.607: INFO: Number of nodes with available pods: 0 May 14 10:56:53.607: INFO: Node kali-worker is running more than one daemon pod May 14 10:56:54.604: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:56:54.607: INFO: Number of nodes with available pods: 1 May 14 10:56:54.607: INFO: Node kali-worker is running more than one daemon pod May 14 10:56:55.605: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:56:55.608: INFO: Number of nodes with available pods: 2 May 14 10:56:55.608: INFO: Number of running nodes: 2, number of available pods: 2 May 14 10:56:55.608: INFO: Update the DaemonSet to trigger a rollout May 14 10:56:55.615: INFO: Updating DaemonSet daemon-set May 14 10:57:04.634: INFO: Roll back the DaemonSet before rollout is complete May 14 10:57:04.642: INFO: Updating DaemonSet daemon-set May 14 10:57:04.642: INFO: Make sure DaemonSet rollback is complete May 14 10:57:04.672: INFO: Wrong image for pod: daemon-set-mgtzk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 14 10:57:04.672: INFO: Pod daemon-set-mgtzk is not available May 14 10:57:04.690: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:57:05.695: INFO: Wrong image for pod: daemon-set-mgtzk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 14 10:57:05.695: INFO: Pod daemon-set-mgtzk is not available May 14 10:57:05.700: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:57:06.694: INFO: Wrong image for pod: daemon-set-mgtzk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 14 10:57:06.694: INFO: Pod daemon-set-mgtzk is not available May 14 10:57:06.714: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:57:07.696: INFO: Wrong image for pod: daemon-set-mgtzk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 14 10:57:07.696: INFO: Pod daemon-set-mgtzk is not available May 14 10:57:07.701: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:57:08.695: INFO: Wrong image for pod: daemon-set-mgtzk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 14 10:57:08.695: INFO: Pod daemon-set-mgtzk is not available May 14 10:57:08.700: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:57:09.695: INFO: Wrong image for pod: daemon-set-mgtzk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 14 10:57:09.695: INFO: Pod daemon-set-mgtzk is not available May 14 10:57:09.700: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:57:10.695: INFO: Wrong image for pod: daemon-set-mgtzk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 14 10:57:10.695: INFO: Pod daemon-set-mgtzk is not available May 14 10:57:10.699: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:57:11.695: INFO: Wrong image for pod: daemon-set-mgtzk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 14 10:57:11.695: INFO: Pod daemon-set-mgtzk is not available May 14 10:57:11.700: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:57:12.693: INFO: Wrong image for pod: daemon-set-mgtzk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 14 10:57:12.693: INFO: Pod daemon-set-mgtzk is not available May 14 10:57:12.696: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 10:57:13.693: INFO: Pod daemon-set-rsw5c is not available May 14 10:57:13.697: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9117, will wait for the garbage collector to delete the pods May 14 10:57:14.152: INFO: Deleting DaemonSet.extensions daemon-set took: 5.30885ms May 14 10:57:14.552: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.240885ms May 14 10:57:19.056: INFO: Number of nodes with available pods: 0 May 14 10:57:19.056: INFO: Number of running nodes: 0, number of available pods: 0 May 14 10:57:19.062: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9117/daemonsets","resourceVersion":"4265581"},"items":null} May 14 10:57:19.065: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9117/pods","resourceVersion":"4265581"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:57:19.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9117" for this suite. 
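The rollback test above creates a RollingUpdate DaemonSet, updates it to the unpullable image `foo:non-existent`, then reverts before the rollout completes. A manifest of the shape this test exercises can be sketched as follows; this is an illustrative config fragment (label keys and the container name are assumptions), not the test's exact object:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate   # pods are replaced one node at a time on update
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # the image the log expects after rollback
```

The rollback in the log corresponds to issuing `kubectl rollout undo daemonset/daemon-set` while the rollout to the bad image is still stalled, which is why the old pod `daemon-set-mgtzk` briefly reports the wrong image before a replacement (`daemon-set-rsw5c`) becomes available without unnecessary restarts on the healthy node.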
• [SLOW TEST:29.003 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":18,"skipped":207,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:57:19.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 14 10:57:19.268: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5045 /api/v1/namespaces/watch-5045/configmaps/e2e-watch-test-label-changed cae7e52d-bc71-4400-a0e9-eb36420de5a4 4265587 0 2020-05-14 10:57:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-14 10:57:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 
102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 14 10:57:19.268: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5045 /api/v1/namespaces/watch-5045/configmaps/e2e-watch-test-label-changed cae7e52d-bc71-4400-a0e9-eb36420de5a4 4265588 0 2020-05-14 10:57:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-14 10:57:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 14 10:57:19.269: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5045 /api/v1/namespaces/watch-5045/configmaps/e2e-watch-test-label-changed cae7e52d-bc71-4400-a0e9-eb36420de5a4 4265589 0 2020-05-14 10:57:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-14 10:57:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time 
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 14 10:57:29.374: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5045 /api/v1/namespaces/watch-5045/configmaps/e2e-watch-test-label-changed cae7e52d-bc71-4400-a0e9-eb36420de5a4 4265645 0 2020-05-14 10:57:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-14 10:57:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 14 10:57:29.374: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5045 /api/v1/namespaces/watch-5045/configmaps/e2e-watch-test-label-changed cae7e52d-bc71-4400-a0e9-eb36420de5a4 4265646 0 2020-05-14 10:57:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-14 10:57:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 14 
10:57:29.374: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5045 /api/v1/namespaces/watch-5045/configmaps/e2e-watch-test-label-changed cae7e52d-bc71-4400-a0e9-eb36420de5a4 4265647 0 2020-05-14 10:57:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-14 10:57:29 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:57:29.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5045" for this suite. 
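The watch semantics this test asserts: a watcher scoped to a label selector sees a DELETED event when the object's labels stop matching the selector (even though the object still exists), and an ADDED event when the labels are restored, while modifications made out of view are invisible. A toy, in-memory sketch of that behavior (not the real Kubernetes client; the function and its inputs are illustrative):

```python
# In-memory model of a label-selector-scoped Kubernetes watch:
# objects leaving the selector appear as DELETED, objects re-entering
# it appear as ADDED, and updates made while out of view are unseen.

def watch_events(selector, updates):
    """Return (event, labels) pairs as a selector-scoped watch would emit.

    `selector` is a dict of required label key/values; `updates` is the
    sequence of label dicts the watched object goes through.
    """
    def matches(labels):
        return all(labels.get(k) == v for k, v in selector.items())

    events = []
    in_view = False
    for labels in updates:
        if matches(labels):
            events.append(("ADDED" if not in_view else "MODIFIED", labels))
            in_view = True
        elif in_view:
            events.append(("DELETED", labels))  # left the selector's view
            in_view = False
        # else: change happened entirely out of view; the watch sees nothing
    return events

# Mirror of the test's sequence: create matching, modify, change the label
# away, modify while unwatched, then restore the label.
seq = watch_events(
    {"watch-this-configmap": "label-changed-and-restored"},
    [
        {"watch-this-configmap": "label-changed-and-restored"},  # create
        {"watch-this-configmap": "label-changed-and-restored"},  # modify once
        {"watch-this-configmap": "wrong-value"},                 # label changed
        {"watch-this-configmap": "wrong-value"},                 # unseen modify
        {"watch-this-configmap": "label-changed-and-restored"},  # restored
    ],
)
print([e for e, _ in seq])  # ['ADDED', 'MODIFIED', 'DELETED', 'ADDED']
```

This matches the log's event stream: ADDED and MODIFIED on creation and first modification, DELETED when the label value changes, then ADDED again once the label is restored (the final DELETED in the log comes from actually deleting the ConfigMap, which this toy model does not cover).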
• [SLOW TEST:10.281 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":19,"skipped":235,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:57:29.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6516 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6516 STEP: creating replication controller externalsvc in namespace services-6516 I0514 10:57:29.679477 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6516, replica count: 2 I0514 
10:57:32.729950 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 10:57:35.730183 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 14 10:57:35.772: INFO: Creating new exec pod May 14 10:57:41.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-6516 execpodcgkdk -- /bin/sh -x -c nslookup clusterip-service' May 14 10:57:42.072: INFO: stderr: "I0514 10:57:41.909836 592 log.go:172] (0xc000714790) (0xc000657540) Create stream\nI0514 10:57:41.909877 592 log.go:172] (0xc000714790) (0xc000657540) Stream added, broadcasting: 1\nI0514 10:57:41.912637 592 log.go:172] (0xc000714790) Reply frame received for 1\nI0514 10:57:41.912671 592 log.go:172] (0xc000714790) (0xc0006f8000) Create stream\nI0514 10:57:41.912683 592 log.go:172] (0xc000714790) (0xc0006f8000) Stream added, broadcasting: 3\nI0514 10:57:41.913394 592 log.go:172] (0xc000714790) Reply frame received for 3\nI0514 10:57:41.913415 592 log.go:172] (0xc000714790) (0xc0006575e0) Create stream\nI0514 10:57:41.913421 592 log.go:172] (0xc000714790) (0xc0006575e0) Stream added, broadcasting: 5\nI0514 10:57:41.913975 592 log.go:172] (0xc000714790) Reply frame received for 5\nI0514 10:57:42.019550 592 log.go:172] (0xc000714790) Data frame received for 5\nI0514 10:57:42.019571 592 log.go:172] (0xc0006575e0) (5) Data frame handling\nI0514 10:57:42.019585 592 log.go:172] (0xc0006575e0) (5) Data frame sent\n+ nslookup clusterip-service\nI0514 10:57:42.063912 592 log.go:172] (0xc000714790) Data frame received for 3\nI0514 10:57:42.063945 592 log.go:172] (0xc0006f8000) (3) Data frame handling\nI0514 10:57:42.063973 592 log.go:172] (0xc0006f8000) (3) Data frame sent\nI0514 10:57:42.065018 
592 log.go:172] (0xc000714790) Data frame received for 3\nI0514 10:57:42.065050 592 log.go:172] (0xc0006f8000) (3) Data frame handling\nI0514 10:57:42.065076 592 log.go:172] (0xc0006f8000) (3) Data frame sent\nI0514 10:57:42.065647 592 log.go:172] (0xc000714790) Data frame received for 5\nI0514 10:57:42.065666 592 log.go:172] (0xc0006575e0) (5) Data frame handling\nI0514 10:57:42.065713 592 log.go:172] (0xc000714790) Data frame received for 3\nI0514 10:57:42.065749 592 log.go:172] (0xc0006f8000) (3) Data frame handling\nI0514 10:57:42.067443 592 log.go:172] (0xc000714790) Data frame received for 1\nI0514 10:57:42.067475 592 log.go:172] (0xc000657540) (1) Data frame handling\nI0514 10:57:42.067503 592 log.go:172] (0xc000657540) (1) Data frame sent\nI0514 10:57:42.067525 592 log.go:172] (0xc000714790) (0xc000657540) Stream removed, broadcasting: 1\nI0514 10:57:42.067609 592 log.go:172] (0xc000714790) Go away received\nI0514 10:57:42.067978 592 log.go:172] (0xc000714790) (0xc000657540) Stream removed, broadcasting: 1\nI0514 10:57:42.068001 592 log.go:172] (0xc000714790) (0xc0006f8000) Stream removed, broadcasting: 3\nI0514 10:57:42.068018 592 log.go:172] (0xc000714790) (0xc0006575e0) Stream removed, broadcasting: 5\n" May 14 10:57:42.072: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6516.svc.cluster.local\tcanonical name = externalsvc.services-6516.svc.cluster.local.\nName:\texternalsvc.services-6516.svc.cluster.local\nAddress: 10.104.252.37\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6516, will wait for the garbage collector to delete the pods May 14 10:57:42.205: INFO: Deleting ReplicationController externalsvc took: 5.033981ms May 14 10:57:42.505: INFO: Terminating ReplicationController externalsvc pods took: 300.193074ms May 14 10:57:53.850: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 10:57:53.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6516" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:24.500 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":20,"skipped":250,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 10:57:53.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 14 10:57:53.947: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 10:57:53.974: INFO: Waiting for terminating namespaces to be deleted... 
May 14 10:57:53.976: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 14 10:57:53.993: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 14 10:57:53.993: INFO: Container kindnet-cni ready: true, restart count 1 May 14 10:57:53.993: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 14 10:57:53.993: INFO: Container kube-proxy ready: true, restart count 0 May 14 10:57:53.993: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 14 10:57:53.999: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 14 10:57:53.999: INFO: Container kube-proxy ready: true, restart count 0 May 14 10:57:53.999: INFO: execpodcgkdk from services-6516 started at 2020-05-14 10:57:35 +0000 UTC (1 container statuses recorded) May 14 10:57:53.999: INFO: Container agnhost-pause ready: true, restart count 0 May 14 10:57:53.999: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 14 10:57:53.999: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-ff373ec6-12f5-4808-b751-91b8ea18d90d STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-ff373ec6-12f5-4808-b751-91b8ea18d90d off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-ff373ec6-12f5-4808-b751-91b8ea18d90d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:03:04.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8673" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:310.352 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":21,"skipped":263,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:03:04.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6887.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6887.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6887.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6887.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6887.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6887.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 11:03:10.503: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:10.507: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:10.510: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:10.513: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:10.522: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:10.525: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from 
pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:10.527: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:10.530: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:10.535: INFO: Lookups using dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local] May 14 11:03:15.539: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:15.543: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:15.546: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local from 
pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:15.549: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:15.558: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:15.560: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:15.563: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:15.566: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:15.571: INFO: Lookups using dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local] May 14 11:03:20.560: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:20.572: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:20.578: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:20.580: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:20.590: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:20.593: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:20.596: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local from pod 
dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:20.599: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:20.608: INFO: Lookups using dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local] May 14 11:03:25.539: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:25.542: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:25.545: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:25.548: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local from pod 
dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:25.556: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:25.559: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:25.561: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:25.563: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:25.567: INFO: Lookups using dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local] May 14 11:03:30.540: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:30.543: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:30.546: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:30.566: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:30.580: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:30.613: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:30.616: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:30.619: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:30.638: INFO: Lookups using dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local] May 14 11:03:35.541: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:35.546: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:35.549: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:35.552: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:35.559: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:35.562: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:35.565: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:35.568: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local from pod dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f: the server could not find the requested resource (get pods dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f) May 14 11:03:35.574: INFO: Lookups using dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6887.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6887.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6887.svc.cluster.local jessie_udp@dns-test-service-2.dns-6887.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6887.svc.cluster.local] May 14 11:03:40.605: INFO: DNS probes using dns-6887/dns-test-4a7f928b-6c6f-491f-b6bd-dd3edddfb41f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 
11:03:41.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6887" for this suite. • [SLOW TEST:37.266 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":22,"skipped":275,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:03:41.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 14 11:03:41.641: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27a19882-01f1-4d3b-b225-2f76ace33af8" in namespace "downward-api-3431" to be "Succeeded or Failed" May 14 11:03:41.646: INFO: Pod "downwardapi-volume-27a19882-01f1-4d3b-b225-2f76ace33af8": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.102578ms May 14 11:03:43.960: INFO: Pod "downwardapi-volume-27a19882-01f1-4d3b-b225-2f76ace33af8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318864599s May 14 11:03:45.965: INFO: Pod "downwardapi-volume-27a19882-01f1-4d3b-b225-2f76ace33af8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.323166771s STEP: Saw pod success May 14 11:03:45.965: INFO: Pod "downwardapi-volume-27a19882-01f1-4d3b-b225-2f76ace33af8" satisfied condition "Succeeded or Failed" May 14 11:03:45.968: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-27a19882-01f1-4d3b-b225-2f76ace33af8 container client-container: STEP: delete the pod May 14 11:03:46.019: INFO: Waiting for pod downwardapi-volume-27a19882-01f1-4d3b-b225-2f76ace33af8 to disappear May 14 11:03:46.062: INFO: Pod downwardapi-volume-27a19882-01f1-4d3b-b225-2f76ace33af8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:03:46.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3431" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":278,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:03:46.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 14 11:03:54.296: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 11:03:54.316: INFO: Pod pod-with-prestop-http-hook still exists May 14 11:03:56.316: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 11:03:56.320: INFO: Pod pod-with-prestop-http-hook still exists May 14 11:03:58.316: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 11:03:58.322: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:03:58.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9700" for this suite. 
• [SLOW TEST:12.253 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":285,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:03:58.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-1e845ee4-3637-4cc4-9492-4258535c0cf9 STEP: Creating secret with name secret-projected-all-test-volume-80df48c5-bfed-454d-b02a-6b1316a9d061 STEP: Creating a pod to test Check all projections for projected volume plugin May 14 11:03:58.420: INFO: Waiting up to 5m0s for pod "projected-volume-3b628758-1b34-48d9-a2fe-26e13c3b2393" in namespace 
"projected-91" to be "Succeeded or Failed" May 14 11:03:58.429: INFO: Pod "projected-volume-3b628758-1b34-48d9-a2fe-26e13c3b2393": Phase="Pending", Reason="", readiness=false. Elapsed: 9.114227ms May 14 11:04:00.432: INFO: Pod "projected-volume-3b628758-1b34-48d9-a2fe-26e13c3b2393": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012638066s May 14 11:04:02.435: INFO: Pod "projected-volume-3b628758-1b34-48d9-a2fe-26e13c3b2393": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015389918s STEP: Saw pod success May 14 11:04:02.435: INFO: Pod "projected-volume-3b628758-1b34-48d9-a2fe-26e13c3b2393" satisfied condition "Succeeded or Failed" May 14 11:04:02.438: INFO: Trying to get logs from node kali-worker2 pod projected-volume-3b628758-1b34-48d9-a2fe-26e13c3b2393 container projected-all-volume-test: STEP: delete the pod May 14 11:04:02.488: INFO: Waiting for pod projected-volume-3b628758-1b34-48d9-a2fe-26e13c3b2393 to disappear May 14 11:04:02.503: INFO: Pod projected-volume-3b628758-1b34-48d9-a2fe-26e13c3b2393 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:04:02.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-91" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":25,"skipped":293,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:04:02.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-3ce53964-f61c-468f-9bbb-94df4862301c STEP: Creating a pod to test consume secrets May 14 11:04:02.624: INFO: Waiting up to 5m0s for pod "pod-secrets-c2aa8ff1-245a-4c6a-afdd-4a40380690c7" in namespace "secrets-5805" to be "Succeeded or Failed" May 14 11:04:02.641: INFO: Pod "pod-secrets-c2aa8ff1-245a-4c6a-afdd-4a40380690c7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.147597ms May 14 11:04:04.644: INFO: Pod "pod-secrets-c2aa8ff1-245a-4c6a-afdd-4a40380690c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020099838s May 14 11:04:06.703: INFO: Pod "pod-secrets-c2aa8ff1-245a-4c6a-afdd-4a40380690c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07891837s May 14 11:04:08.708: INFO: Pod "pod-secrets-c2aa8ff1-245a-4c6a-afdd-4a40380690c7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.08372446s STEP: Saw pod success May 14 11:04:08.708: INFO: Pod "pod-secrets-c2aa8ff1-245a-4c6a-afdd-4a40380690c7" satisfied condition "Succeeded or Failed" May 14 11:04:08.711: INFO: Trying to get logs from node kali-worker pod pod-secrets-c2aa8ff1-245a-4c6a-afdd-4a40380690c7 container secret-volume-test: STEP: delete the pod May 14 11:04:08.765: INFO: Waiting for pod pod-secrets-c2aa8ff1-245a-4c6a-afdd-4a40380690c7 to disappear May 14 11:04:08.776: INFO: Pod pod-secrets-c2aa8ff1-245a-4c6a-afdd-4a40380690c7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:04:08.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5805" for this suite. • [SLOW TEST:6.270 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":327,"failed":0} SS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:04:08.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] 
Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 14 11:04:08.890: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 14 11:04:17.964: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:04:17.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7570" for this suite. 
• [SLOW TEST:9.193 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":329,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:04:17.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs May 14 11:04:18.055: INFO: Waiting up to 5m0s for pod "pod-0f08b595-dc8d-460b-9bac-f74e9b1774b0" in namespace "emptydir-7070" to be "Succeeded or Failed" May 14 11:04:18.072: INFO: Pod "pod-0f08b595-dc8d-460b-9bac-f74e9b1774b0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.31498ms May 14 11:04:20.123: INFO: Pod "pod-0f08b595-dc8d-460b-9bac-f74e9b1774b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067718681s May 14 11:04:22.127: INFO: Pod "pod-0f08b595-dc8d-460b-9bac-f74e9b1774b0": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.072325113s May 14 11:04:24.132: INFO: Pod "pod-0f08b595-dc8d-460b-9bac-f74e9b1774b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077247985s STEP: Saw pod success May 14 11:04:24.132: INFO: Pod "pod-0f08b595-dc8d-460b-9bac-f74e9b1774b0" satisfied condition "Succeeded or Failed" May 14 11:04:24.136: INFO: Trying to get logs from node kali-worker2 pod pod-0f08b595-dc8d-460b-9bac-f74e9b1774b0 container test-container: STEP: delete the pod May 14 11:04:24.216: INFO: Waiting for pod pod-0f08b595-dc8d-460b-9bac-f74e9b1774b0 to disappear May 14 11:04:24.246: INFO: Pod pod-0f08b595-dc8d-460b-9bac-f74e9b1774b0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:04:24.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7070" for this suite. • [SLOW TEST:6.279 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":361,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:04:24.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: 
Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8031.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8031.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8031.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8031.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8031.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8031.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 11:04:32.530: INFO: DNS probes using dns-8031/dns-test-b91fd888-2b03-457c-b118-a35de27702d0 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:04:32.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8031" for this suite. • [SLOW TEST:8.646 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":29,"skipped":373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:04:32.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 14 11:04:33.597: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 11:04:33.609: INFO: Waiting for terminating namespaces to be deleted... May 14 11:04:33.611: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 14 11:04:33.617: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 14 11:04:33.617: INFO: Container kindnet-cni ready: true, restart count 1 May 14 11:04:33.617: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 14 11:04:33.617: INFO: Container kube-proxy ready: true, restart count 0 May 14 11:04:33.617: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 14 11:04:33.622: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 14 11:04:33.622: INFO: Container kindnet-cni ready: true, restart count 0 May 14 11:04:33.622: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 14 11:04:33.622: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node kali-worker STEP: verifying the node has the label node kali-worker2 May 14 11:04:33.839: INFO: Pod kindnet-f8plf requesting resource cpu=100m on Node kali-worker May 14 11:04:33.839: INFO: Pod kindnet-mcdh2 requesting resource cpu=100m on Node kali-worker2 May 14 11:04:33.839: INFO: Pod kube-proxy-mmnb6 requesting resource cpu=0m on Node kali-worker2 May 14 
11:04:33.839: INFO: Pod kube-proxy-vrswj requesting resource cpu=0m on Node kali-worker STEP: Starting Pods to consume most of the cluster CPU. May 14 11:04:33.839: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker May 14 11:04:33.845: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-33771c1f-c076-4ca1-a793-871533560f32.160ee03da4c14eb7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-930/filler-pod-33771c1f-c076-4ca1-a793-871533560f32 to kali-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-33771c1f-c076-4ca1-a793-871533560f32.160ee03e09de550d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-33771c1f-c076-4ca1-a793-871533560f32.160ee03e71de6b53], Reason = [Created], Message = [Created container filler-pod-33771c1f-c076-4ca1-a793-871533560f32] STEP: Considering event: Type = [Normal], Name = [filler-pod-33771c1f-c076-4ca1-a793-871533560f32.160ee03e886f3138], Reason = [Started], Message = [Started container filler-pod-33771c1f-c076-4ca1-a793-871533560f32] STEP: Considering event: Type = [Normal], Name = [filler-pod-d5cb1bd4-733b-49d4-8f70-afc47d673d1a.160ee03dad6036da], Reason = [Scheduled], Message = [Successfully assigned sched-pred-930/filler-pod-d5cb1bd4-733b-49d4-8f70-afc47d673d1a to kali-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-d5cb1bd4-733b-49d4-8f70-afc47d673d1a.160ee03e3266079d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d5cb1bd4-733b-49d4-8f70-afc47d673d1a.160ee03e7c1c5d6f], Reason = [Created], Message = [Created container filler-pod-d5cb1bd4-733b-49d4-8f70-afc47d673d1a] STEP: Considering event: Type = 
[Normal], Name = [filler-pod-d5cb1bd4-733b-49d4-8f70-afc47d673d1a.160ee03e8a7785f9], Reason = [Started], Message = [Started container filler-pod-d5cb1bd4-733b-49d4-8f70-afc47d673d1a] STEP: Considering event: Type = [Warning], Name = [additional-pod.160ee03f16b0df26], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node kali-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node kali-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:04:41.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-930" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.254 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":30,"skipped":408,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:04:41.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-03590101-34bf-4519-95d7-db3ebff1dac0 STEP: Creating configMap with name cm-test-opt-upd-5f0b4759-d493-452f-838d-c090ff7c5102 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-03590101-34bf-4519-95d7-db3ebff1dac0 STEP: Updating configmap cm-test-opt-upd-5f0b4759-d493-452f-838d-c090ff7c5102 STEP: Creating configMap with name cm-test-opt-create-a32ff30e-ae3c-44c5-a37b-df7b815c5835 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:06:01.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1107" for this suite. • [SLOW TEST:80.840 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":442,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:06:01.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:06:18.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8404" for this suite. • [SLOW TEST:16.838 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":32,"skipped":464,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:06:18.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod May 14 11:06:20.034: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:06:28.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4070" for this suite. 
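
The InitContainer case above depends on the kubelet running init containers to completion, in declaration order, before any app container starts. That sequencing can be sketched outside the cluster — in Python, purely as an analogy; the actual test is Go code under test/e2e/common/init_container.go, and the container names below simply mirror the pod shape the test creates:

```python
# Illustrative model of init-container sequencing for a RestartAlways pod:
# every init container must exit successfully, in order, before any
# regular container is started.

def start_pod(init_containers, containers):
    """Return the ordered list of containers that actually ran.

    Each entry in init_containers is (name, succeeds); if an init
    container fails, later init containers and all app containers
    are never started (a real kubelet would retry instead).
    """
    ran = []
    for name, succeeds in init_containers:
        ran.append(name)
        if not succeeds:
            return ran  # app containers never start
    ran.extend(name for name, _ in containers)
    return ran

# Two init containers, then the app container, as in the e2e pod spec.
order = start_pod([("init1", True), ("init2", True)], [("run1", None)])
print(order)  # ['init1', 'init2', 'run1']
```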
• [SLOW TEST:10.167 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":33,"skipped":474,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:06:29.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-628ab68e-afa2-4388-95eb-9e8e5190232e in namespace container-probe-414 May 14 11:06:33.176: INFO: Started pod test-webserver-628ab68e-afa2-4388-95eb-9e8e5190232e in namespace container-probe-414 STEP: checking the pod's current state and verifying that restartCount is present May 14 11:06:33.179: INFO: Initial restart count of pod test-webserver-628ab68e-afa2-4388-95eb-9e8e5190232e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:10:34.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-414" for this suite. • [SLOW TEST:245.169 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":477,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:10:34.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium May 14 11:10:34.654: INFO: Waiting up to 5m0s for pod "pod-ffcaf7d2-cd7d-49bf-89f7-6cf308d66452" in namespace "emptydir-7597" to be "Succeeded or Failed" May 14 11:10:34.670: INFO: Pod "pod-ffcaf7d2-cd7d-49bf-89f7-6cf308d66452": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.751677ms May 14 11:10:36.938: INFO: Pod "pod-ffcaf7d2-cd7d-49bf-89f7-6cf308d66452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284395201s May 14 11:10:38.942: INFO: Pod "pod-ffcaf7d2-cd7d-49bf-89f7-6cf308d66452": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.287989126s STEP: Saw pod success May 14 11:10:38.942: INFO: Pod "pod-ffcaf7d2-cd7d-49bf-89f7-6cf308d66452" satisfied condition "Succeeded or Failed" May 14 11:10:38.944: INFO: Trying to get logs from node kali-worker pod pod-ffcaf7d2-cd7d-49bf-89f7-6cf308d66452 container test-container: STEP: delete the pod May 14 11:10:39.319: INFO: Waiting for pod pod-ffcaf7d2-cd7d-49bf-89f7-6cf308d66452 to disappear May 14 11:10:39.322: INFO: Pod pod-ffcaf7d2-cd7d-49bf-89f7-6cf308d66452 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:10:39.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7597" for this suite. 
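
The (non-root,0666,default) case above boils down to the test container creating a file on the emptyDir mount with mode 0666 and verifying the permission bits. That check can be reproduced locally — a minimal Python sketch of the same idea, not the Go mount-tester the suite actually runs:

```python
import os
import stat
import tempfile

def file_mode_matches(path, expected):
    """Compare a file's permission bits against an expected octal mode."""
    return stat.S_IMODE(os.stat(path).st_mode) == expected

# Create a file the way the test container does on its emptyDir mount,
# set the bits explicitly, then verify they stuck.
with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "test-file")
    with open(p, "w") as f:
        f.write("mount-tester new file\n")
    os.chmod(p, 0o666)  # explicit chmod, so the process umask is irrelevant
    print(file_mode_matches(p, 0o666))  # True
```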
• [SLOW TEST:5.158 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":485,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:10:39.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 11:10:39.980: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 11:10:41.989: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725051439, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051439, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051440, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051439, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 11:10:45.044: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:10:45.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9448" for this suite. STEP: Destroying namespace "webhook-9448-markers" for this suite. 
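
The "Wait for the deployment to be ready" step above polls the Deployment status until the updated replicas become available; the verbose DeploymentStatus dumps in the log are simply each poll iteration printed in full. The loop behaves roughly as follows — a Python sketch of the polling pattern, with a simplified readiness rule, not the e2e framework's actual Go helper:

```python
def deployment_complete(status, desired_replicas):
    """Simplified readiness rule: all replicas updated and available,
    none unavailable (the real helper also checks ObservedGeneration)."""
    return (
        status["UpdatedReplicas"] == desired_replicas
        and status["AvailableReplicas"] == desired_replicas
        and status["UnavailableReplicas"] == 0
    )

def wait_for_ready(poll, desired_replicas, max_polls=30):
    """Poll a status-returning callable until the deployment completes."""
    for _ in range(max_polls):
        if deployment_complete(poll(), desired_replicas):
            return True
    return False

# Simulated poll results matching the log: unavailable first, then ready.
statuses = iter([
    {"UpdatedReplicas": 1, "AvailableReplicas": 0, "UnavailableReplicas": 1},
    {"UpdatedReplicas": 1, "AvailableReplicas": 1, "UnavailableReplicas": 0},
])
print(wait_for_ready(lambda: next(statuses), desired_replicas=1))  # True
```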
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.537 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":36,"skipped":490,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:10:45.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-6d9946d6-79f6-450e-b4d6-58581aae08b5 STEP: Creating a pod to test consume configMaps May 14 11:10:45.972: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3fafb058-0cef-4b5e-abd4-58063d9c1cc1" in namespace "projected-7891" to be "Succeeded or Failed" May 14 11:10:45.994: INFO: Pod "pod-projected-configmaps-3fafb058-0cef-4b5e-abd4-58063d9c1cc1": Phase="Pending", 
Reason="", readiness=false. Elapsed: 22.08092ms May 14 11:10:48.218: INFO: Pod "pod-projected-configmaps-3fafb058-0cef-4b5e-abd4-58063d9c1cc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2462849s May 14 11:10:50.247: INFO: Pod "pod-projected-configmaps-3fafb058-0cef-4b5e-abd4-58063d9c1cc1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274822391s May 14 11:10:52.250: INFO: Pod "pod-projected-configmaps-3fafb058-0cef-4b5e-abd4-58063d9c1cc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.278273282s STEP: Saw pod success May 14 11:10:52.250: INFO: Pod "pod-projected-configmaps-3fafb058-0cef-4b5e-abd4-58063d9c1cc1" satisfied condition "Succeeded or Failed" May 14 11:10:52.252: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-3fafb058-0cef-4b5e-abd4-58063d9c1cc1 container projected-configmap-volume-test: STEP: delete the pod May 14 11:10:52.303: INFO: Waiting for pod pod-projected-configmaps-3fafb058-0cef-4b5e-abd4-58063d9c1cc1 to disappear May 14 11:10:52.439: INFO: Pod pod-projected-configmaps-3fafb058-0cef-4b5e-abd4-58063d9c1cc1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:10:52.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7891" for this suite. 
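
In the "with mappings" variant above, the projected volume does not expose ConfigMap keys under their own names: each key is remapped to an explicit relative path inside the mount via the volume's `items` list. The remapping behaves roughly like this — a Python sketch assuming a single ConfigMap source, not the kubelet's actual projection code, with example key/path values chosen for illustration:

```python
def project_configmap(data, items):
    """Map ConfigMap data into volume-relative file paths.

    data:  the ConfigMap's key -> value pairs
    items: list of {"key": ..., "path": ...} mappings, as in a
           projected/configMap volume source
    """
    return {item["path"]: data[item["key"]] for item in items}

# One ConfigMap entry, remapped from key "data-1" to a nested path,
# so the pod reads it at <mountPath>/path/to/data-2.
files = project_configmap(
    {"data-1": "value-1"},
    [{"key": "data-1", "path": "path/to/data-2"}],
)
print(files)  # {'path/to/data-2': 'value-1'}
```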
• [SLOW TEST:6.576 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":494,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:10:52.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 14 11:10:52.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3180' May 14 11:10:56.290: INFO: stderr: "" May 14 11:10:56.290: INFO: stdout: 
"pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 May 14 11:10:56.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3180' May 14 11:11:03.427: INFO: stderr: "" May 14 11:11:03.427: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:11:03.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3180" for this suite. • [SLOW TEST:10.993 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":38,"skipped":505,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:11:03.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:11:03.562: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 14 11:11:08.565: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 14 11:11:08.565: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 14 11:11:10.570: INFO: Creating deployment "test-rollover-deployment" May 14 11:11:10.611: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 14 11:11:12.617: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 14 11:11:12.623: INFO: Ensure that both replica sets have 1 created replica May 14 11:11:12.628: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 14 11:11:12.635: INFO: Updating deployment test-rollover-deployment May 14 11:11:12.635: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 14 11:11:14.680: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 14 11:11:14.687: INFO: Make sure deployment "test-rollover-deployment" is complete May 14 11:11:14.692: INFO: all replica sets need to contain the pod-template-hash label May 14 11:11:14.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051472, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:11:16.700: INFO: all replica sets need to contain the pod-template-hash label May 14 11:11:16.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051472, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:11:18.700: INFO: all replica sets need to contain the pod-template-hash label May 14 11:11:18.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051476, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:11:20.701: INFO: all replica sets need to contain the pod-template-hash label May 14 11:11:20.701: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051476, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:11:22.700: INFO: all replica sets need to contain the pod-template-hash label May 14 11:11:22.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051476, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:11:24.728: INFO: all replica sets need to contain the pod-template-hash label May 14 11:11:24.728: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051476, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:11:26.702: INFO: all replica sets need to contain the pod-template-hash label May 14 11:11:26.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051476, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051470, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:11:28.700: INFO: May 14 11:11:28.700: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 14 11:11:28.709: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9740 /apis/apps/v1/namespaces/deployment-9740/deployments/test-rollover-deployment c9e3a5d3-35c4-4056-8fb3-fc1a38af25ea 4268915 2 2020-05-14 11:11:10 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-14 11:11:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 
76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} 
{kube-controller-manager Update apps/v1 2020-05-14 11:11:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 
125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029e40a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-14 11:11:10 +0000 UTC,LastTransitionTime:2020-05-14 11:11:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-05-14 11:11:27 +0000 UTC,LastTransitionTime:2020-05-14 11:11:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 14 11:11:28.713: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b 
deployment-9740 /apis/apps/v1/namespaces/deployment-9740/replicasets/test-rollover-deployment-84f7f6f64b a9702967-be88-49b8-a4bf-9e8ae67f62bb 4268904 2 2020-05-14 11:11:12 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment c9e3a5d3-35c4-4056-8fb3-fc1a38af25ea 0xc0029e46c7 0xc0029e46c8}] [] [{kube-controller-manager Update apps/v1 2020-05-14 11:11:26 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 57 101 51 97 53 100 51 45 51 53 99 52 45 52 48 53 54 45 56 102 98 51 45 102 99 49 97 51 56 97 102 50 53 101 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 
58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 
83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029e4758 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 14 11:11:28.713: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 14 11:11:28.714: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9740 /apis/apps/v1/namespaces/deployment-9740/replicasets/test-rollover-controller 690f3873-dad0-4f4d-9026-47bdd9016370 4268914 2 2020-05-14 
11:11:03 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment c9e3a5d3-35c4-4056-8fb3-fc1a38af25ea 0xc0029e44af 0xc0029e44c0}] [] [{e2e.test Update apps/v1 2020-05-14 11:11:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 
101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-14 11:11:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 57 101 51 97 53 100 51 45 51 53 99 52 45 52 48 53 54 45 56 102 98 51 45 102 99 49 97 51 56 97 102 50 53 101 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] 
[]} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0029e4558 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 14 11:11:28.714: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-9740 /apis/apps/v1/namespaces/deployment-9740/replicasets/test-rollover-deployment-5686c4cfd5 bd21b39a-03ea-4885-8efa-397996ab0b91 4268850 2 2020-05-14 11:11:10 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment c9e3a5d3-35c4-4056-8fb3-fc1a38af25ea 0xc0029e45c7 0xc0029e45c8}] [] [{kube-controller-manager Update apps/v1 2020-05-14 11:11:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 
125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 57 101 51 97 53 100 51 45 51 53 99 52 45 52 48 53 54 45 56 102 98 51 45 102 99 49 97 51 56 97 102 50 53 101 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 
99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029e4658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 14 11:11:28.717: INFO: Pod "test-rollover-deployment-84f7f6f64b-lhwnf" is available: &Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-lhwnf test-rollover-deployment-84f7f6f64b- deployment-9740 /api/v1/namespaces/deployment-9740/pods/test-rollover-deployment-84f7f6f64b-lhwnf 4139b446-a4ff-48e8-929a-e224b78edb7a 4268872 0 2020-05-14 11:11:12 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b a9702967-be88-49b8-a4bf-9e8ae67f62bb 0xc00294f087 0xc00294f088}] [] [{kube-controller-manager Update v1 2020-05-14 11:11:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 55 48 50 57 54 55 45 98 101 56 56 45 52 57 98 56 45 97 52 98 102 45 57 101 56 97 101 54 55 102 54 50 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 
46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-14 11:11:16 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 
58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tnxw8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tnxw8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tnxw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImageP
ullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:11:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:11:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:11:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.101,StartTime:2020-05-14 11:11:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 11:11:16 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://a4c91b589db8c27b6a2a6f3494495fbe9a34d1701e162c034ce8e750f8d898b6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:11:28.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9740" for this suite. • [SLOW TEST:25.284 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":39,"skipped":507,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:11:28.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 11:11:29.251: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 11:11:31.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051489, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051489, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051489, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051489, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:11:33.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051489, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051489, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051489, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051489, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 11:11:36.327: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:11:36.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8955" for this suite. STEP: Destroying namespace "webhook-8955-markers" for this suite. 
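The discovery checks the webhook test performs above (group in /apis, group/version in the group document, resource names in the group/version document) can be sketched as plain dict lookups. The sample documents below are hand-built illustrations, not captured from a live apiserver:

```python
def has_group_version(apis_doc, group, version):
    """True if group/version appears in a /apis discovery document."""
    for g in apis_doc.get("groups", []):
        if g["name"] == group:
            return any(v["groupVersion"] == f"{group}/{version}"
                       for v in g.get("versions", []))
    return False

def has_resources(group_version_doc, names):
    """True if every named resource appears in a group/version document."""
    found = {r["name"] for r in group_version_doc.get("resources", [])}
    return set(names) <= found

# Illustrative stand-ins for the real discovery responses.
apis = {"groups": [{"name": "admissionregistration.k8s.io",
                    "versions": [{"groupVersion": "admissionregistration.k8s.io/v1",
                                  "version": "v1"}]}]}
v1_doc = {"resources": [{"name": "mutatingwebhookconfigurations"},
                        {"name": "validatingwebhookconfigurations"}]}

assert has_group_version(apis, "admissionregistration.k8s.io", "v1")
assert has_resources(v1_doc, ["mutatingwebhookconfigurations",
                              "validatingwebhookconfigurations"])
```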
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.733 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":40,"skipped":508,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:11:36.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-acc033c9-c690-450d-83b8-ee65d8a8d28d STEP: Creating a pod to test consume secrets May 14 11:11:36.653: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ad4c471e-c0b9-4b48-92aa-ec945ecd3b7a" in namespace "projected-8674" to be "Succeeded or Failed" May 14 11:11:36.726: INFO: Pod 
"pod-projected-secrets-ad4c471e-c0b9-4b48-92aa-ec945ecd3b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 73.720949ms May 14 11:11:38.731: INFO: Pod "pod-projected-secrets-ad4c471e-c0b9-4b48-92aa-ec945ecd3b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078521153s May 14 11:11:40.769: INFO: Pod "pod-projected-secrets-ad4c471e-c0b9-4b48-92aa-ec945ecd3b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116664276s May 14 11:11:42.772: INFO: Pod "pod-projected-secrets-ad4c471e-c0b9-4b48-92aa-ec945ecd3b7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.119392327s STEP: Saw pod success May 14 11:11:42.772: INFO: Pod "pod-projected-secrets-ad4c471e-c0b9-4b48-92aa-ec945ecd3b7a" satisfied condition "Succeeded or Failed" May 14 11:11:42.774: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-ad4c471e-c0b9-4b48-92aa-ec945ecd3b7a container projected-secret-volume-test: STEP: delete the pod May 14 11:11:42.802: INFO: Waiting for pod pod-projected-secrets-ad4c471e-c0b9-4b48-92aa-ec945ecd3b7a to disappear May 14 11:11:42.818: INFO: Pod pod-projected-secrets-ad4c471e-c0b9-4b48-92aa-ec945ecd3b7a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:11:42.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8674" for this suite. 
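A minimal sketch of the kind of pod manifest the projected-secret test above creates: a projected volume whose secret source maps a key to a custom file path. The key/path names and mount path are illustrative assumptions, not taken from the log:

```python
# Hypothetical pod manifest mirroring the "secret volume with mappings" test.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-secrets-example"},
    "spec": {
        "containers": [{
            "name": "projected-secret-volume-test",
            "image": "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
            "volumeMounts": [{"name": "projected-secret-volume",
                              "mountPath": "/etc/projected-secret-volume",
                              "readOnly": True}],
        }],
        "volumes": [{
            "name": "projected-secret-volume",
            "projected": {"sources": [{
                "secret": {
                    "name": "projected-secret-test-map-example",
                    # The mapping: secret key "data-1" lands at the
                    # relative file path "new-path-data-1" (names assumed).
                    "items": [{"key": "data-1", "path": "new-path-data-1"}],
                }}]},
        }],
        "restartPolicy": "Never",
    },
}

item = pod["spec"]["volumes"][0]["projected"]["sources"][0]["secret"]["items"][0]
assert item["path"] == "new-path-data-1"
```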
• [SLOW TEST:6.369 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":513,"failed":0} SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:11:42.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container May 14 11:11:47.459: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5649 pod-service-account-5a2c1aa8-f7a0-41b3-84ac-12b8ab4e75b7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 14 11:11:47.633: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5649 pod-service-account-5a2c1aa8-f7a0-41b3-84ac-12b8ab4e75b7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 14 11:11:47.851: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5649 
pod-service-account-5a2c1aa8-f7a0-41b3-84ac-12b8ab4e75b7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:11:48.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5649" for this suite. • [SLOW TEST:5.251 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":42,"skipped":522,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:11:48.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 14 11:11:48.142: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-9e727ab2-da24-4866-84db-05c42df96c82" in namespace "projected-7450" to be "Succeeded or Failed" May 14 11:11:48.188: INFO: Pod "downwardapi-volume-9e727ab2-da24-4866-84db-05c42df96c82": Phase="Pending", Reason="", readiness=false. Elapsed: 45.649502ms May 14 11:11:50.313: INFO: Pod "downwardapi-volume-9e727ab2-da24-4866-84db-05c42df96c82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171386987s May 14 11:11:52.318: INFO: Pod "downwardapi-volume-9e727ab2-da24-4866-84db-05c42df96c82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.176075146s STEP: Saw pod success May 14 11:11:52.318: INFO: Pod "downwardapi-volume-9e727ab2-da24-4866-84db-05c42df96c82" satisfied condition "Succeeded or Failed" May 14 11:11:52.322: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-9e727ab2-da24-4866-84db-05c42df96c82 container client-container: STEP: delete the pod May 14 11:11:52.388: INFO: Waiting for pod downwardapi-volume-9e727ab2-da24-4866-84db-05c42df96c82 to disappear May 14 11:11:52.421: INFO: Pod downwardapi-volume-9e727ab2-da24-4866-84db-05c42df96c82 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:11:52.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7450" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":532,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:11:52.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:12:09.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9736" for this suite. • [SLOW TEST:17.346 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":44,"skipped":550,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:12:09.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 11:12:11.810: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 11:12:13.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051532, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051532, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051532, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051531, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:12:15.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051532, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051532, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051532, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051531, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 11:12:18.857: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:12:18.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2640-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by 
the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:12:19.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6290" for this suite. STEP: Destroying namespace "webhook-6290-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.351 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":45,"skipped":563,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:12:20.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short 
dns-test-service-3.dns-1137.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1137.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1137.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1137.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 11:12:26.285: INFO: DNS probes using dns-test-9b9e7413-453f-4d9c-a0a5-8b28b14da246 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1137.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1137.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1137.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1137.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 11:12:36.399: INFO: File wheezy_udp@dns-test-service-3.dns-1137.svc.cluster.local from pod dns-1137/dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 11:12:36.402: INFO: File jessie_udp@dns-test-service-3.dns-1137.svc.cluster.local from pod dns-1137/dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 14 11:12:36.402: INFO: Lookups using dns-1137/dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 failed for: [wheezy_udp@dns-test-service-3.dns-1137.svc.cluster.local jessie_udp@dns-test-service-3.dns-1137.svc.cluster.local] May 14 11:12:41.451: INFO: File wheezy_udp@dns-test-service-3.dns-1137.svc.cluster.local from pod dns-1137/dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 11:12:41.453: INFO: File jessie_udp@dns-test-service-3.dns-1137.svc.cluster.local from pod dns-1137/dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 11:12:41.453: INFO: Lookups using dns-1137/dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 failed for: [wheezy_udp@dns-test-service-3.dns-1137.svc.cluster.local jessie_udp@dns-test-service-3.dns-1137.svc.cluster.local] May 14 11:12:46.407: INFO: File wheezy_udp@dns-test-service-3.dns-1137.svc.cluster.local from pod dns-1137/dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 11:12:46.409: INFO: File jessie_udp@dns-test-service-3.dns-1137.svc.cluster.local from pod dns-1137/dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 11:12:46.409: INFO: Lookups using dns-1137/dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 failed for: [wheezy_udp@dns-test-service-3.dns-1137.svc.cluster.local jessie_udp@dns-test-service-3.dns-1137.svc.cluster.local] May 14 11:12:51.406: INFO: File wheezy_udp@dns-test-service-3.dns-1137.svc.cluster.local from pod dns-1137/dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 11:12:51.409: INFO: File jessie_udp@dns-test-service-3.dns-1137.svc.cluster.local from pod dns-1137/dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 14 11:12:51.409: INFO: Lookups using dns-1137/dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 failed for: [wheezy_udp@dns-test-service-3.dns-1137.svc.cluster.local jessie_udp@dns-test-service-3.dns-1137.svc.cluster.local] May 14 11:12:56.409: INFO: DNS probes using dns-test-2a379cc8-9260-4e2e-a768-a7f6bad865c6 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1137.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1137.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1137.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1137.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 11:13:03.715: INFO: DNS probes using dns-test-6543e131-9ed2-4249-b783-c05b7e4e45b1 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:13:04.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1137" for this suite. 
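The probe commands logged above follow one template: a 30-iteration dig loop that writes each lookup result to a per-prober file. A sketch of building that command string from a service FQDN and record type, checked against the exact command seen in the log:

```python
def probe_cmd(fqdn, record_type, result_file):
    """Build the dig polling loop used by the DNS probe pods."""
    return (f"for i in `seq 1 30`; do dig +short {fqdn} {record_type} "
            f"> {result_file}; sleep 1; done")

fqdn = "dns-test-service-3.dns-1137.svc.cluster.local"
cmd = probe_cmd(fqdn, "CNAME", f"/results/wheezy_udp@{fqdn}")

# Matches the wheezy CNAME probe command from the log above.
assert cmd == ("for i in `seq 1 30`; do dig +short "
               "dns-test-service-3.dns-1137.svc.cluster.local CNAME "
               "> /results/wheezy_udp@"
               "dns-test-service-3.dns-1137.svc.cluster.local; sleep 1; done")
```

After the service is switched to type=ClusterIP, the same template is reused with record type `A` instead of `CNAME`, as the later log lines show.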
• [SLOW TEST:43.969 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":46,"skipped":573,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:13:04.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC May 14 11:13:04.459: INFO: namespace kubectl-4549 May 14 11:13:04.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4549' May 14 11:13:04.879: INFO: stderr: "" May 14 11:13:04.879: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 14 11:13:05.942: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:13:05.942: INFO: Found 0 / 1 May 14 11:13:07.205: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:13:07.205: INFO: Found 0 / 1 May 14 11:13:07.883: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:13:07.883: INFO: Found 0 / 1 May 14 11:13:08.947: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:13:08.947: INFO: Found 0 / 1 May 14 11:13:09.889: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:13:09.889: INFO: Found 1 / 1 May 14 11:13:09.889: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 14 11:13:09.953: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:13:09.953: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 14 11:13:09.953: INFO: wait on agnhost-master startup in kubectl-4549 May 14 11:13:09.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs agnhost-master-8qv5m agnhost-master --namespace=kubectl-4549' May 14 11:13:10.227: INFO: stderr: "" May 14 11:13:10.227: INFO: stdout: "Paused\n" STEP: exposing RC May 14 11:13:10.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4549' May 14 11:13:10.429: INFO: stderr: "" May 14 11:13:10.429: INFO: stdout: "service/rm2 exposed\n" May 14 11:13:10.557: INFO: Service rm2 in namespace kubectl-4549 found. STEP: exposing service May 14 11:13:12.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4549' May 14 11:13:12.751: INFO: stderr: "" May 14 11:13:12.751: INFO: stdout: "service/rm3 exposed\n" May 14 11:13:12.777: INFO: Service rm3 in namespace kubectl-4549 found. 
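The two `kubectl expose` invocations above are equivalent to creating Service objects by hand. A sketch of the first one (`rm2`), assuming the selector is the `app: agnhost` label the log shows the RC's pods carrying:

```yaml
# Roughly what `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` creates.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-4549
spec:
  selector:
    app: agnhost        # pod label observed in the log's selector matches
  ports:
  - port: 1234          # service port
    targetPort: 6379    # container port traffic is forwarded to
```

`expose service rm2 --name=rm3 --port=2345` then derives a second service from `rm2`, reusing its selector but listening on port 2345.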
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:13:14.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4549" for this suite. • [SLOW TEST:10.694 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":47,"skipped":624,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:13:14.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-6ce67ae8-401b-4a84-93fc-23de9cdd80cb STEP: Creating a pod to test consume configMaps May 14 11:13:15.081: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-71ee466e-246b-43d8-9bdc-01f99f5ba175" in 
namespace "projected-4320" to be "Succeeded or Failed" May 14 11:13:15.111: INFO: Pod "pod-projected-configmaps-71ee466e-246b-43d8-9bdc-01f99f5ba175": Phase="Pending", Reason="", readiness=false. Elapsed: 29.781164ms May 14 11:13:17.114: INFO: Pod "pod-projected-configmaps-71ee466e-246b-43d8-9bdc-01f99f5ba175": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033486876s May 14 11:13:19.119: INFO: Pod "pod-projected-configmaps-71ee466e-246b-43d8-9bdc-01f99f5ba175": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037579649s May 14 11:13:21.144: INFO: Pod "pod-projected-configmaps-71ee466e-246b-43d8-9bdc-01f99f5ba175": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063560461s STEP: Saw pod success May 14 11:13:21.145: INFO: Pod "pod-projected-configmaps-71ee466e-246b-43d8-9bdc-01f99f5ba175" satisfied condition "Succeeded or Failed" May 14 11:13:21.147: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-71ee466e-246b-43d8-9bdc-01f99f5ba175 container projected-configmap-volume-test: STEP: delete the pod May 14 11:13:21.384: INFO: Waiting for pod pod-projected-configmaps-71ee466e-246b-43d8-9bdc-01f99f5ba175 to disappear May 14 11:13:21.387: INFO: Pod pod-projected-configmaps-71ee466e-246b-43d8-9bdc-01f99f5ba175 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:13:21.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4320" for this suite. 
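The defaultMode behavior verified above comes from a pod shaped roughly like this; the names, mode value, and key are illustrative, not the generated ones:

```yaml
# Sketch of a pod consuming a projected configMap volume with defaultMode set.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # assumption: any image that can read a file
    command: ["sh", "-c", "ls -l /etc/projected-configmap-volume && cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400                     # illustrative mode; applied to each projected file
      sources:
      - configMap:
          name: projected-configmap-example # hypothetical configMap name
```

The framework then reads the container's logs to confirm the file's content and mode, which is why success is phrased as the pod reaching "Succeeded or Failed".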
• [SLOW TEST:6.676 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":638,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:13:21.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 14 11:13:30.479: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 11:13:30.540: INFO: Pod pod-with-poststart-http-hook still exists May 14 11:13:32.540: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 11:13:32.545: INFO: Pod pod-with-poststart-http-hook still exists May 14 11:13:34.541: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 11:13:34.543: INFO: Pod pod-with-poststart-http-hook still exists May 14 11:13:36.540: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 11:13:36.544: INFO: Pod pod-with-poststart-http-hook still exists May 14 11:13:38.540: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 11:13:38.545: INFO: Pod pod-with-poststart-http-hook still exists May 14 11:13:40.541: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 11:13:40.546: INFO: Pod pod-with-poststart-http-hook still exists May 14 11:13:42.540: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 11:13:42.545: INFO: Pod pod-with-poststart-http-hook still exists May 14 11:13:44.541: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 11:13:44.545: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:13:44.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3580" for this suite. 
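The pod under test attaches the hook via `lifecycle.postStart.httpGet`; a minimal sketch, with the handler path and port as assumptions (the framework points the hook at the handler pod it created in BeforeEach):

```yaml
# Sketch of a pod with a postStart HTTP hook; target details are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.2       # assumption: any long-running image works
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # hypothetical path on the handler container
          port: 8080                  # hypothetical handler port
```

Kubernetes fires the GET immediately after the container starts; the "check poststart hook" STEP passes once the handler observes the request.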
• [SLOW TEST:23.088 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":646,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:13:44.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 14 11:13:44.625: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bde55a5-956c-4a82-b5b3-8e08dab47df2" in namespace "projected-2782" to be "Succeeded or Failed" May 14 11:13:44.636: INFO: Pod 
"downwardapi-volume-4bde55a5-956c-4a82-b5b3-8e08dab47df2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.514675ms May 14 11:13:46.640: INFO: Pod "downwardapi-volume-4bde55a5-956c-4a82-b5b3-8e08dab47df2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014362491s May 14 11:13:48.645: INFO: Pod "downwardapi-volume-4bde55a5-956c-4a82-b5b3-8e08dab47df2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019172827s STEP: Saw pod success May 14 11:13:48.645: INFO: Pod "downwardapi-volume-4bde55a5-956c-4a82-b5b3-8e08dab47df2" satisfied condition "Succeeded or Failed" May 14 11:13:48.649: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-4bde55a5-956c-4a82-b5b3-8e08dab47df2 container client-container: STEP: delete the pod May 14 11:13:48.781: INFO: Waiting for pod downwardapi-volume-4bde55a5-956c-4a82-b5b3-8e08dab47df2 to disappear May 14 11:13:48.794: INFO: Pod downwardapi-volume-4bde55a5-956c-4a82-b5b3-8e08dab47df2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:13:48.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2782" for this suite. 
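The downward API volume exercised here maps the container's CPU request to a file. A sketch, assuming an illustrative 250m request and file name (the container name matches the log):

```yaml
# Sketch of a projected downwardAPI volume exposing requests.cpu.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumption
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # illustrative request
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m            # report the value in millicores
```

With a divisor of 1m, a 250m request is projected as the string "250".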
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":699,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:13:48.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:13:48.944: INFO: Creating ReplicaSet my-hostname-basic-b2ba441f-833b-456d-9553-9dfed8ebc25d May 14 11:13:48.978: INFO: Pod name my-hostname-basic-b2ba441f-833b-456d-9553-9dfed8ebc25d: Found 0 pods out of 1 May 14 11:13:53.989: INFO: Pod name my-hostname-basic-b2ba441f-833b-456d-9553-9dfed8ebc25d: Found 1 pods out of 1 May 14 11:13:53.989: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b2ba441f-833b-456d-9553-9dfed8ebc25d" is running May 14 11:13:54.004: INFO: Pod "my-hostname-basic-b2ba441f-833b-456d-9553-9dfed8ebc25d-mld24" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 11:13:49 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 11:13:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 11:13:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 11:13:48 +0000 UTC Reason: Message:}]) May 14 11:13:54.004: INFO: Trying to dial the pod May 14 11:13:59.014: INFO: Controller my-hostname-basic-b2ba441f-833b-456d-9553-9dfed8ebc25d: Got expected result from replica 1 [my-hostname-basic-b2ba441f-833b-456d-9553-9dfed8ebc25d-mld24]: "my-hostname-basic-b2ba441f-833b-456d-9553-9dfed8ebc25d-mld24", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:13:59.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9075" for this suite. • [SLOW TEST:10.204 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":51,"skipped":715,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:13:59.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works 
for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:13:59.141: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 14 11:14:01.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-643 create -f -' May 14 11:14:07.530: INFO: stderr: "" May 14 11:14:07.530: INFO: stdout: "e2e-test-crd-publish-openapi-4175-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 14 11:14:07.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-643 delete e2e-test-crd-publish-openapi-4175-crds test-cr' May 14 11:14:07.713: INFO: stderr: "" May 14 11:14:07.713: INFO: stdout: "e2e-test-crd-publish-openapi-4175-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 14 11:14:07.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-643 apply -f -' May 14 11:14:08.139: INFO: stderr: "" May 14 11:14:08.139: INFO: stdout: "e2e-test-crd-publish-openapi-4175-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 14 11:14:08.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-643 delete e2e-test-crd-publish-openapi-4175-crds test-cr' May 14 11:14:08.271: INFO: stderr: "" May 14 11:14:08.271: INFO: stdout: "e2e-test-crd-publish-openapi-4175-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 14 11:14:08.271: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4175-crds' May 14 11:14:08.510: INFO: stderr: "" May 14 11:14:08.510: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4175-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:14:11.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-643" for this suite. 
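The `kubectl explain` output above corresponds to a CRD whose spec and status are typed objects with `x-kubernetes-preserve-unknown-fields`; the field descriptions below come from the log, while the group and names are illustrative (the test randomizes them):

```yaml
# Sketch of a CRD preserving unknown fields in embedded objects.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.example.com        # hypothetical; the real name is randomized
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-test-crds
    singular: e2e-test-crd
    kind: E2eTestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true   # unknown properties pass validation
          status:
            description: Status of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true
```

This is why `kubectl create` and `apply` accept a custom resource carrying arbitrary unknown properties, as the STEPs above demonstrate.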
• [SLOW TEST:12.419 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":52,"skipped":753,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:14:11.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium May 14 11:14:11.540: INFO: Waiting up to 5m0s for pod "pod-9b2801bf-07ae-4035-87e8-35fdcfc30e3b" in namespace "emptydir-24" to be "Succeeded or Failed" May 14 11:14:11.543: INFO: Pod "pod-9b2801bf-07ae-4035-87e8-35fdcfc30e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.494446ms May 14 11:14:13.547: INFO: Pod "pod-9b2801bf-07ae-4035-87e8-35fdcfc30e3b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00729906s May 14 11:14:15.587: INFO: Pod "pod-9b2801bf-07ae-4035-87e8-35fdcfc30e3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047819396s STEP: Saw pod success May 14 11:14:15.587: INFO: Pod "pod-9b2801bf-07ae-4035-87e8-35fdcfc30e3b" satisfied condition "Succeeded or Failed" May 14 11:14:15.590: INFO: Trying to get logs from node kali-worker pod pod-9b2801bf-07ae-4035-87e8-35fdcfc30e3b container test-container: STEP: delete the pod May 14 11:14:16.014: INFO: Waiting for pod pod-9b2801bf-07ae-4035-87e8-35fdcfc30e3b to disappear May 14 11:14:16.378: INFO: Pod pod-9b2801bf-07ae-4035-87e8-35fdcfc30e3b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:14:16.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-24" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":756,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:14:16.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource 
version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:14:21.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1569" for this suite. • [SLOW TEST:5.347 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":54,"skipped":759,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:14:21.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:14:32.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2306" for this suite. • [SLOW TEST:11.148 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":275,"completed":55,"skipped":775,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:14:32.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command May 14 11:14:32.985: INFO: Waiting up to 5m0s for pod "client-containers-4a50c929-cbf1-49fc-90b9-c5d3dfa9fbf0" in namespace "containers-6307" to be "Succeeded or Failed" May 14 11:14:32.992: INFO: Pod "client-containers-4a50c929-cbf1-49fc-90b9-c5d3dfa9fbf0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.635241ms May 14 11:14:35.228: INFO: Pod "client-containers-4a50c929-cbf1-49fc-90b9-c5d3dfa9fbf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243467744s May 14 11:14:37.366: INFO: Pod "client-containers-4a50c929-cbf1-49fc-90b9-c5d3dfa9fbf0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.38095679s May 14 11:14:39.371: INFO: Pod "client-containers-4a50c929-cbf1-49fc-90b9-c5d3dfa9fbf0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.385950434s STEP: Saw pod success May 14 11:14:39.371: INFO: Pod "client-containers-4a50c929-cbf1-49fc-90b9-c5d3dfa9fbf0" satisfied condition "Succeeded or Failed" May 14 11:14:39.374: INFO: Trying to get logs from node kali-worker2 pod client-containers-4a50c929-cbf1-49fc-90b9-c5d3dfa9fbf0 container test-container: STEP: delete the pod May 14 11:14:39.389: INFO: Waiting for pod client-containers-4a50c929-cbf1-49fc-90b9-c5d3dfa9fbf0 to disappear May 14 11:14:39.401: INFO: Pod client-containers-4a50c929-cbf1-49fc-90b9-c5d3dfa9fbf0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:14:39.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6307" for this suite. • [SLOW TEST:6.528 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":787,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:14:39.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default 
service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-783a1c8a-0498-49ed-889f-cc8510f0bc5c STEP: Creating a pod to test consume configMaps May 14 11:14:39.555: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5beb1fd-d9f3-43e0-b8a4-522f68ff6e36" in namespace "configmap-6432" to be "Succeeded or Failed" May 14 11:14:39.571: INFO: Pod "pod-configmaps-e5beb1fd-d9f3-43e0-b8a4-522f68ff6e36": Phase="Pending", Reason="", readiness=false. Elapsed: 15.554032ms May 14 11:14:41.574: INFO: Pod "pod-configmaps-e5beb1fd-d9f3-43e0-b8a4-522f68ff6e36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018912361s May 14 11:14:43.577: INFO: Pod "pod-configmaps-e5beb1fd-d9f3-43e0-b8a4-522f68ff6e36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021839189s STEP: Saw pod success May 14 11:14:43.577: INFO: Pod "pod-configmaps-e5beb1fd-d9f3-43e0-b8a4-522f68ff6e36" satisfied condition "Succeeded or Failed" May 14 11:14:43.578: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-e5beb1fd-d9f3-43e0-b8a4-522f68ff6e36 container configmap-volume-test: STEP: delete the pod May 14 11:14:43.623: INFO: Waiting for pod pod-configmaps-e5beb1fd-d9f3-43e0-b8a4-522f68ff6e36 to disappear May 14 11:14:43.773: INFO: Pod pod-configmaps-e5beb1fd-d9f3-43e0-b8a4-522f68ff6e36 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:14:43.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6432" for this suite. 
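"Mappings" here means the configMap volume uses `items` to project specific keys to chosen paths instead of mounting every key under its own name. A sketch with illustrative key and path names:

```yaml
# Sketch of a configMap volume with key-to-path mappings.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                 # assumption
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-example      # hypothetical configMap name
      items:
      - key: data-2                # illustrative key
        path: path/to/data-2       # the file appears only at this mapped path
```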
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":789,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:14:43.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-44
STEP: creating replication controller nodeport-test in namespace services-44
I0514 11:14:44.143991 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-44, replica count: 2
I0514 11:14:47.194436 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0514 11:14:50.194685 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 14 11:14:50.194: INFO: Creating new exec pod
May 14 11:14:57.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-44 execpodzhlmk -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
May 14 11:14:57.601: INFO: stderr: "I0514 11:14:57.437300 906 log.go:172] (0xc0009c2370) (0xc000ad81e0) Create stream\nI0514 11:14:57.437347 906 log.go:172] (0xc0009c2370) (0xc000ad81e0) Stream added, broadcasting: 1\nI0514 11:14:57.439880 906 log.go:172] (0xc0009c2370) Reply frame received for 1\nI0514 11:14:57.439941 906 log.go:172] (0xc0009c2370) (0xc00079f4a0) Create stream\nI0514 11:14:57.439963 906 log.go:172] (0xc0009c2370) (0xc00079f4a0) Stream added, broadcasting: 3\nI0514 11:14:57.440764 906 log.go:172] (0xc0009c2370) Reply frame received for 3\nI0514 11:14:57.440789 906 log.go:172] (0xc0009c2370) (0xc000ad8280) Create stream\nI0514 11:14:57.440797 906 log.go:172] (0xc0009c2370) (0xc000ad8280) Stream added, broadcasting: 5\nI0514 11:14:57.441645 906 log.go:172] (0xc0009c2370) Reply frame received for 5\nI0514 11:14:57.568334 906 log.go:172] (0xc0009c2370) Data frame received for 5\nI0514 11:14:57.568360 906 log.go:172] (0xc000ad8280) (5) Data frame handling\nI0514 11:14:57.568377 906 log.go:172] (0xc000ad8280) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0514 11:14:57.592969 906 log.go:172] (0xc0009c2370) Data frame received for 5\nI0514 11:14:57.592992 906 log.go:172] (0xc000ad8280) (5) Data frame handling\nI0514 11:14:57.593005 906 log.go:172] (0xc000ad8280) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0514 11:14:57.593741 906 log.go:172] (0xc0009c2370) Data frame received for 5\nI0514 11:14:57.593766 906 log.go:172] (0xc000ad8280) (5) Data frame handling\nI0514 11:14:57.593796 906 log.go:172] (0xc0009c2370) Data frame received for 3\nI0514 11:14:57.593829 906 log.go:172] (0xc00079f4a0) (3) Data frame handling\nI0514 11:14:57.595522 906 log.go:172] (0xc0009c2370) Data frame received for 1\nI0514 11:14:57.595561 906 log.go:172] (0xc000ad81e0) (1) Data frame handling\nI0514 11:14:57.595591 906 log.go:172] (0xc000ad81e0) (1) Data frame sent\nI0514 11:14:57.595632 906 log.go:172] (0xc0009c2370) (0xc000ad81e0) 
Stream removed, broadcasting: 1\nI0514 11:14:57.595770 906 log.go:172] (0xc0009c2370) Go away received\nI0514 11:14:57.596263 906 log.go:172] (0xc0009c2370) (0xc000ad81e0) Stream removed, broadcasting: 1\nI0514 11:14:57.596300 906 log.go:172] (0xc0009c2370) (0xc00079f4a0) Stream removed, broadcasting: 3\nI0514 11:14:57.596315 906 log.go:172] (0xc0009c2370) (0xc000ad8280) Stream removed, broadcasting: 5\n" May 14 11:14:57.601: INFO: stdout: "" May 14 11:14:57.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-44 execpodzhlmk -- /bin/sh -x -c nc -zv -t -w 2 10.97.179.217 80' May 14 11:14:57.821: INFO: stderr: "I0514 11:14:57.741613 926 log.go:172] (0xc0008eedc0) (0xc00053a3c0) Create stream\nI0514 11:14:57.741664 926 log.go:172] (0xc0008eedc0) (0xc00053a3c0) Stream added, broadcasting: 1\nI0514 11:14:57.744148 926 log.go:172] (0xc0008eedc0) Reply frame received for 1\nI0514 11:14:57.744195 926 log.go:172] (0xc0008eedc0) (0xc0008e4e60) Create stream\nI0514 11:14:57.744208 926 log.go:172] (0xc0008eedc0) (0xc0008e4e60) Stream added, broadcasting: 3\nI0514 11:14:57.745381 926 log.go:172] (0xc0008eedc0) Reply frame received for 3\nI0514 11:14:57.745435 926 log.go:172] (0xc0008eedc0) (0xc000331040) Create stream\nI0514 11:14:57.745456 926 log.go:172] (0xc0008eedc0) (0xc000331040) Stream added, broadcasting: 5\nI0514 11:14:57.746394 926 log.go:172] (0xc0008eedc0) Reply frame received for 5\nI0514 11:14:57.814504 926 log.go:172] (0xc0008eedc0) Data frame received for 5\nI0514 11:14:57.814557 926 log.go:172] (0xc000331040) (5) Data frame handling\nI0514 11:14:57.814578 926 log.go:172] (0xc000331040) (5) Data frame sent\nI0514 11:14:57.814591 926 log.go:172] (0xc0008eedc0) Data frame received for 5\nI0514 11:14:57.814598 926 log.go:172] (0xc000331040) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.179.217 80\nConnection to 10.97.179.217 80 port [tcp/http] succeeded!\nI0514 11:14:57.814626 
926 log.go:172] (0xc0008eedc0) Data frame received for 3\nI0514 11:14:57.814634 926 log.go:172] (0xc0008e4e60) (3) Data frame handling\nI0514 11:14:57.815860 926 log.go:172] (0xc0008eedc0) Data frame received for 1\nI0514 11:14:57.815877 926 log.go:172] (0xc00053a3c0) (1) Data frame handling\nI0514 11:14:57.815895 926 log.go:172] (0xc00053a3c0) (1) Data frame sent\nI0514 11:14:57.815993 926 log.go:172] (0xc0008eedc0) (0xc00053a3c0) Stream removed, broadcasting: 1\nI0514 11:14:57.816025 926 log.go:172] (0xc0008eedc0) Go away received\nI0514 11:14:57.816358 926 log.go:172] (0xc0008eedc0) (0xc00053a3c0) Stream removed, broadcasting: 1\nI0514 11:14:57.816374 926 log.go:172] (0xc0008eedc0) (0xc0008e4e60) Stream removed, broadcasting: 3\nI0514 11:14:57.816381 926 log.go:172] (0xc0008eedc0) (0xc000331040) Stream removed, broadcasting: 5\n" May 14 11:14:57.821: INFO: stdout: "" May 14 11:14:57.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-44 execpodzhlmk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 30205' May 14 11:14:58.025: INFO: stderr: "I0514 11:14:57.952892 946 log.go:172] (0xc000af22c0) (0xc0005a7540) Create stream\nI0514 11:14:57.952981 946 log.go:172] (0xc000af22c0) (0xc0005a7540) Stream added, broadcasting: 1\nI0514 11:14:57.960436 946 log.go:172] (0xc000af22c0) Reply frame received for 1\nI0514 11:14:57.960486 946 log.go:172] (0xc000af22c0) (0xc00080c000) Create stream\nI0514 11:14:57.960500 946 log.go:172] (0xc000af22c0) (0xc00080c000) Stream added, broadcasting: 3\nI0514 11:14:57.962312 946 log.go:172] (0xc000af22c0) Reply frame received for 3\nI0514 11:14:57.962347 946 log.go:172] (0xc000af22c0) (0xc00080c0a0) Create stream\nI0514 11:14:57.962360 946 log.go:172] (0xc000af22c0) (0xc00080c0a0) Stream added, broadcasting: 5\nI0514 11:14:57.963334 946 log.go:172] (0xc000af22c0) Reply frame received for 5\nI0514 11:14:58.018110 946 log.go:172] (0xc000af22c0) Data frame 
received for 3\nI0514 11:14:58.018140 946 log.go:172] (0xc00080c000) (3) Data frame handling\nI0514 11:14:58.018167 946 log.go:172] (0xc000af22c0) Data frame received for 5\nI0514 11:14:58.018177 946 log.go:172] (0xc00080c0a0) (5) Data frame handling\nI0514 11:14:58.018188 946 log.go:172] (0xc00080c0a0) (5) Data frame sent\nI0514 11:14:58.018202 946 log.go:172] (0xc000af22c0) Data frame received for 5\nI0514 11:14:58.018211 946 log.go:172] (0xc00080c0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 30205\nConnection to 172.17.0.15 30205 port [tcp/30205] succeeded!\nI0514 11:14:58.019873 946 log.go:172] (0xc000af22c0) Data frame received for 1\nI0514 11:14:58.019906 946 log.go:172] (0xc0005a7540) (1) Data frame handling\nI0514 11:14:58.019924 946 log.go:172] (0xc0005a7540) (1) Data frame sent\nI0514 11:14:58.020020 946 log.go:172] (0xc000af22c0) (0xc0005a7540) Stream removed, broadcasting: 1\nI0514 11:14:58.020064 946 log.go:172] (0xc000af22c0) Go away received\nI0514 11:14:58.020754 946 log.go:172] (0xc000af22c0) (0xc0005a7540) Stream removed, broadcasting: 1\nI0514 11:14:58.020789 946 log.go:172] (0xc000af22c0) (0xc00080c000) Stream removed, broadcasting: 3\nI0514 11:14:58.020807 946 log.go:172] (0xc000af22c0) (0xc00080c0a0) Stream removed, broadcasting: 5\n" May 14 11:14:58.025: INFO: stdout: "" May 14 11:14:58.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-44 execpodzhlmk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 30205' May 14 11:14:58.248: INFO: stderr: "I0514 11:14:58.159651 966 log.go:172] (0xc00003a580) (0xc0006a35e0) Create stream\nI0514 11:14:58.159710 966 log.go:172] (0xc00003a580) (0xc0006a35e0) Stream added, broadcasting: 1\nI0514 11:14:58.162885 966 log.go:172] (0xc00003a580) Reply frame received for 1\nI0514 11:14:58.162956 966 log.go:172] (0xc00003a580) (0xc00058e000) Create stream\nI0514 11:14:58.162981 966 log.go:172] (0xc00003a580) (0xc00058e000) 
Stream added, broadcasting: 3\nI0514 11:14:58.163902 966 log.go:172] (0xc00003a580) Reply frame received for 3\nI0514 11:14:58.163936 966 log.go:172] (0xc00003a580) (0xc0006a3680) Create stream\nI0514 11:14:58.163947 966 log.go:172] (0xc00003a580) (0xc0006a3680) Stream added, broadcasting: 5\nI0514 11:14:58.164849 966 log.go:172] (0xc00003a580) Reply frame received for 5\nI0514 11:14:58.239760 966 log.go:172] (0xc00003a580) Data frame received for 5\nI0514 11:14:58.239819 966 log.go:172] (0xc0006a3680) (5) Data frame handling\nI0514 11:14:58.239850 966 log.go:172] (0xc0006a3680) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.18 30205\nI0514 11:14:58.240062 966 log.go:172] (0xc00003a580) Data frame received for 5\nI0514 11:14:58.240096 966 log.go:172] (0xc0006a3680) (5) Data frame handling\nI0514 11:14:58.240123 966 log.go:172] (0xc0006a3680) (5) Data frame sent\nConnection to 172.17.0.18 30205 port [tcp/30205] succeeded!\nI0514 11:14:58.240606 966 log.go:172] (0xc00003a580) Data frame received for 5\nI0514 11:14:58.240643 966 log.go:172] (0xc0006a3680) (5) Data frame handling\nI0514 11:14:58.240700 966 log.go:172] (0xc00003a580) Data frame received for 3\nI0514 11:14:58.240732 966 log.go:172] (0xc00058e000) (3) Data frame handling\nI0514 11:14:58.242406 966 log.go:172] (0xc00003a580) Data frame received for 1\nI0514 11:14:58.242425 966 log.go:172] (0xc0006a35e0) (1) Data frame handling\nI0514 11:14:58.242439 966 log.go:172] (0xc0006a35e0) (1) Data frame sent\nI0514 11:14:58.242465 966 log.go:172] (0xc00003a580) (0xc0006a35e0) Stream removed, broadcasting: 1\nI0514 11:14:58.242740 966 log.go:172] (0xc00003a580) Go away received\nI0514 11:14:58.242789 966 log.go:172] (0xc00003a580) (0xc0006a35e0) Stream removed, broadcasting: 1\nI0514 11:14:58.242804 966 log.go:172] (0xc00003a580) (0xc00058e000) Stream removed, broadcasting: 3\nI0514 11:14:58.242812 966 log.go:172] (0xc00003a580) (0xc0006a3680) Stream removed, broadcasting: 5\n" May 14 11:14:58.248: INFO: stdout: "" 
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:14:58.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-44" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:14.458 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":58,"skipped":810,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:14:58.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 14 11:14:58.369: INFO: Waiting up to 5m0s for pod "downward-api-99c1cfcf-4108-45f4-a9df-6f9c8f5edd93" in namespace "downward-api-4644" to be "Succeeded or Failed"
May 14 11:14:58.373: INFO: Pod "downward-api-99c1cfcf-4108-45f4-a9df-6f9c8f5edd93": Phase="Pending", Reason="", readiness=false. Elapsed: 3.900323ms
May 14 11:15:00.443: INFO: Pod "downward-api-99c1cfcf-4108-45f4-a9df-6f9c8f5edd93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074038357s
May 14 11:15:02.448: INFO: Pod "downward-api-99c1cfcf-4108-45f4-a9df-6f9c8f5edd93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07892134s
STEP: Saw pod success
May 14 11:15:02.448: INFO: Pod "downward-api-99c1cfcf-4108-45f4-a9df-6f9c8f5edd93" satisfied condition "Succeeded or Failed"
May 14 11:15:02.452: INFO: Trying to get logs from node kali-worker2 pod downward-api-99c1cfcf-4108-45f4-a9df-6f9c8f5edd93 container dapi-container: 
STEP: delete the pod
May 14 11:15:02.532: INFO: Waiting for pod downward-api-99c1cfcf-4108-45f4-a9df-6f9c8f5edd93 to disappear
May 14 11:15:02.569: INFO: Pod downward-api-99c1cfcf-4108-45f4-a9df-6f9c8f5edd93 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:15:02.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4644" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":830,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:15:02.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
May 14 11:15:08.733: INFO: Pod pod-hostip-9068f099-c824-4ccd-9033-e0bf94d05cf1 has hostIP: 172.17.0.18
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:15:08.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3864" for this suite.
• [SLOW TEST:6.162 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":843,"failed":0}
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:15:08.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:15:08.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8385
I0514 11:15:08.886179 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8385, replica count: 1
I0514 11:15:09.936659 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0514 11:15:10.936985 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0514 11:15:11.937444 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0514 11:15:12.937647 7 
runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 11:15:13.937852 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 14 11:15:14.342: INFO: Created: latency-svc-bqg4p May 14 11:15:14.349: INFO: Got endpoints: latency-svc-bqg4p [311.005835ms] May 14 11:15:14.467: INFO: Created: latency-svc-w9fbg May 14 11:15:14.472: INFO: Got endpoints: latency-svc-w9fbg [123.098476ms] May 14 11:15:14.493: INFO: Created: latency-svc-gfjld May 14 11:15:14.513: INFO: Got endpoints: latency-svc-gfjld [164.457263ms] May 14 11:15:14.672: INFO: Created: latency-svc-rjwz8 May 14 11:15:14.718: INFO: Got endpoints: latency-svc-rjwz8 [369.403462ms] May 14 11:15:14.755: INFO: Created: latency-svc-vnpf5 May 14 11:15:14.846: INFO: Got endpoints: latency-svc-vnpf5 [496.919182ms] May 14 11:15:14.946: INFO: Created: latency-svc-2btnr May 14 11:15:14.989: INFO: Got endpoints: latency-svc-2btnr [640.522666ms] May 14 11:15:14.989: INFO: Created: latency-svc-mnhkg May 14 11:15:15.020: INFO: Got endpoints: latency-svc-mnhkg [670.778458ms] May 14 11:15:15.091: INFO: Created: latency-svc-phwcp May 14 11:15:15.107: INFO: Got endpoints: latency-svc-phwcp [757.990417ms] May 14 11:15:15.147: INFO: Created: latency-svc-zt7j6 May 14 11:15:15.171: INFO: Got endpoints: latency-svc-zt7j6 [821.991913ms] May 14 11:15:15.229: INFO: Created: latency-svc-zdkm9 May 14 11:15:15.240: INFO: Got endpoints: latency-svc-zdkm9 [891.185951ms] May 14 11:15:15.262: INFO: Created: latency-svc-w7qlm May 14 11:15:15.270: INFO: Got endpoints: latency-svc-w7qlm [920.834516ms] May 14 11:15:15.289: INFO: Created: latency-svc-wzkfz May 14 11:15:15.300: INFO: Got endpoints: latency-svc-wzkfz [950.742255ms] May 14 11:15:15.379: INFO: Created: latency-svc-f6n8c May 14 11:15:15.384: INFO: Got endpoints: latency-svc-f6n8c [1.034763411s] 
May 14 11:15:15.404: INFO: Created: latency-svc-kmbh7 May 14 11:15:15.418: INFO: Got endpoints: latency-svc-kmbh7 [1.069036633s] May 14 11:15:15.435: INFO: Created: latency-svc-qps9b May 14 11:15:15.448: INFO: Got endpoints: latency-svc-qps9b [1.098762589s] May 14 11:15:15.464: INFO: Created: latency-svc-pqxjj May 14 11:15:15.509: INFO: Got endpoints: latency-svc-pqxjj [1.159934119s] May 14 11:15:15.572: INFO: Created: latency-svc-tkxwv May 14 11:15:15.599: INFO: Got endpoints: latency-svc-tkxwv [1.127264829s] May 14 11:15:15.677: INFO: Created: latency-svc-rnw6x May 14 11:15:15.690: INFO: Got endpoints: latency-svc-rnw6x [1.176634325s] May 14 11:15:15.709: INFO: Created: latency-svc-55x82 May 14 11:15:15.720: INFO: Got endpoints: latency-svc-55x82 [1.001517541s] May 14 11:15:15.740: INFO: Created: latency-svc-c8nn6 May 14 11:15:15.755: INFO: Got endpoints: latency-svc-c8nn6 [909.516988ms] May 14 11:15:15.865: INFO: Created: latency-svc-j5hrv May 14 11:15:15.878: INFO: Got endpoints: latency-svc-j5hrv [888.112621ms] May 14 11:15:15.901: INFO: Created: latency-svc-njwf9 May 14 11:15:15.918: INFO: Got endpoints: latency-svc-njwf9 [898.382016ms] May 14 11:15:15.964: INFO: Created: latency-svc-4b2n9 May 14 11:15:15.986: INFO: Created: latency-svc-8d7gr May 14 11:15:15.986: INFO: Got endpoints: latency-svc-4b2n9 [878.903149ms] May 14 11:15:15.996: INFO: Got endpoints: latency-svc-8d7gr [824.678587ms] May 14 11:15:16.016: INFO: Created: latency-svc-n6dsj May 14 11:15:16.027: INFO: Got endpoints: latency-svc-n6dsj [786.482257ms] May 14 11:15:16.045: INFO: Created: latency-svc-h82wm May 14 11:15:16.064: INFO: Got endpoints: latency-svc-h82wm [793.57884ms] May 14 11:15:16.115: INFO: Created: latency-svc-swbfl May 14 11:15:16.124: INFO: Got endpoints: latency-svc-swbfl [823.559002ms] May 14 11:15:16.143: INFO: Created: latency-svc-5qwzh May 14 11:15:16.178: INFO: Got endpoints: latency-svc-5qwzh [794.34344ms] May 14 11:15:16.203: INFO: Created: latency-svc-29f6j May 14 
11:15:16.253: INFO: Got endpoints: latency-svc-29f6j [834.597488ms] May 14 11:15:16.269: INFO: Created: latency-svc-sdbwl May 14 11:15:16.283: INFO: Got endpoints: latency-svc-sdbwl [835.204349ms] May 14 11:15:16.298: INFO: Created: latency-svc-rf47t May 14 11:15:16.323: INFO: Got endpoints: latency-svc-rf47t [813.285225ms] May 14 11:15:16.396: INFO: Created: latency-svc-6kbgk May 14 11:15:16.401: INFO: Got endpoints: latency-svc-6kbgk [801.62503ms] May 14 11:15:16.456: INFO: Created: latency-svc-jmjsw May 14 11:15:16.473: INFO: Got endpoints: latency-svc-jmjsw [783.26741ms] May 14 11:15:16.546: INFO: Created: latency-svc-pqp4w May 14 11:15:16.726: INFO: Got endpoints: latency-svc-pqp4w [1.005846936s] May 14 11:15:16.726: INFO: Created: latency-svc-nwghc May 14 11:15:16.762: INFO: Got endpoints: latency-svc-nwghc [1.006414109s] May 14 11:15:16.763: INFO: Created: latency-svc-mt9cm May 14 11:15:16.795: INFO: Got endpoints: latency-svc-mt9cm [917.385412ms] May 14 11:15:16.899: INFO: Created: latency-svc-d7427 May 14 11:15:16.915: INFO: Got endpoints: latency-svc-d7427 [996.964152ms] May 14 11:15:16.960: INFO: Created: latency-svc-2f4pv May 14 11:15:17.036: INFO: Got endpoints: latency-svc-2f4pv [1.049739787s] May 14 11:15:17.061: INFO: Created: latency-svc-j4cxw May 14 11:15:17.078: INFO: Got endpoints: latency-svc-j4cxw [1.081746513s] May 14 11:15:17.133: INFO: Created: latency-svc-6p62w May 14 11:15:17.174: INFO: Got endpoints: latency-svc-6p62w [1.147288531s] May 14 11:15:17.199: INFO: Created: latency-svc-zlqkf May 14 11:15:17.216: INFO: Got endpoints: latency-svc-zlqkf [1.152668739s] May 14 11:15:17.271: INFO: Created: latency-svc-kfcsh May 14 11:15:17.312: INFO: Got endpoints: latency-svc-kfcsh [1.18850648s] May 14 11:15:17.337: INFO: Created: latency-svc-jjm2m May 14 11:15:17.361: INFO: Got endpoints: latency-svc-jjm2m [1.182522495s] May 14 11:15:17.378: INFO: Created: latency-svc-t2gr8 May 14 11:15:17.492: INFO: Got endpoints: latency-svc-t2gr8 [1.238923635s] 
May 14 11:15:17.512: INFO: Created: latency-svc-rxx6b May 14 11:15:17.523: INFO: Got endpoints: latency-svc-rxx6b [1.240143393s] May 14 11:15:17.571: INFO: Created: latency-svc-lgbzs May 14 11:15:17.590: INFO: Got endpoints: latency-svc-lgbzs [1.267139497s] May 14 11:15:17.678: INFO: Created: latency-svc-mk529 May 14 11:15:17.727: INFO: Got endpoints: latency-svc-mk529 [1.326210233s] May 14 11:15:17.827: INFO: Created: latency-svc-8d9bn May 14 11:15:17.877: INFO: Got endpoints: latency-svc-8d9bn [1.403475066s] May 14 11:15:17.907: INFO: Created: latency-svc-xpqbk May 14 11:15:17.920: INFO: Got endpoints: latency-svc-xpqbk [1.194271903s] May 14 11:15:17.995: INFO: Created: latency-svc-qgpmx May 14 11:15:18.004: INFO: Got endpoints: latency-svc-qgpmx [1.242490126s] May 14 11:15:18.046: INFO: Created: latency-svc-zpsrm May 14 11:15:18.071: INFO: Got endpoints: latency-svc-zpsrm [1.275949706s] May 14 11:15:18.144: INFO: Created: latency-svc-p8rf9 May 14 11:15:18.155: INFO: Got endpoints: latency-svc-p8rf9 [1.239230947s] May 14 11:15:18.190: INFO: Created: latency-svc-cph5b May 14 11:15:18.203: INFO: Got endpoints: latency-svc-cph5b [1.167434144s] May 14 11:15:18.225: INFO: Created: latency-svc-4gknp May 14 11:15:18.306: INFO: Got endpoints: latency-svc-4gknp [1.228017849s] May 14 11:15:18.339: INFO: Created: latency-svc-bwfmj May 14 11:15:18.390: INFO: Got endpoints: latency-svc-bwfmj [1.215798715s] May 14 11:15:18.510: INFO: Created: latency-svc-hdq25 May 14 11:15:18.528: INFO: Got endpoints: latency-svc-hdq25 [1.311541312s] May 14 11:15:18.573: INFO: Created: latency-svc-4zxkj May 14 11:15:18.671: INFO: Got endpoints: latency-svc-4zxkj [1.359091737s] May 14 11:15:18.674: INFO: Created: latency-svc-w2lpm May 14 11:15:18.736: INFO: Got endpoints: latency-svc-w2lpm [1.374657353s] May 14 11:15:18.899: INFO: Created: latency-svc-jlp7z May 14 11:15:18.915: INFO: Got endpoints: latency-svc-jlp7z [1.423300021s] May 14 11:15:18.959: INFO: Created: latency-svc-ls96f May 14 
11:15:18.969: INFO: Got endpoints: latency-svc-ls96f [1.445708349s] May 14 11:15:18.988: INFO: Created: latency-svc-p9fh6 May 14 11:15:19.071: INFO: Got endpoints: latency-svc-p9fh6 [1.481258389s] May 14 11:15:19.107: INFO: Created: latency-svc-kwxnq May 14 11:15:19.119: INFO: Got endpoints: latency-svc-kwxnq [1.392116601s] May 14 11:15:19.179: INFO: Created: latency-svc-kqf7n May 14 11:15:19.191: INFO: Got endpoints: latency-svc-kqf7n [1.314260641s] May 14 11:15:19.214: INFO: Created: latency-svc-wqg59 May 14 11:15:19.227: INFO: Got endpoints: latency-svc-wqg59 [1.307098484s] May 14 11:15:19.245: INFO: Created: latency-svc-rfdqv May 14 11:15:19.258: INFO: Got endpoints: latency-svc-rfdqv [1.253445323s] May 14 11:15:19.318: INFO: Created: latency-svc-54bqs May 14 11:15:19.322: INFO: Got endpoints: latency-svc-54bqs [1.251254249s] May 14 11:15:19.365: INFO: Created: latency-svc-zdgws May 14 11:15:19.382: INFO: Got endpoints: latency-svc-zdgws [1.227573066s] May 14 11:15:19.406: INFO: Created: latency-svc-jl6jd May 14 11:15:19.455: INFO: Got endpoints: latency-svc-jl6jd [1.251765507s] May 14 11:15:19.498: INFO: Created: latency-svc-48cmh May 14 11:15:19.511: INFO: Got endpoints: latency-svc-48cmh [1.205405906s] May 14 11:15:19.540: INFO: Created: latency-svc-46fn5 May 14 11:15:19.593: INFO: Got endpoints: latency-svc-46fn5 [1.203134316s] May 14 11:15:19.610: INFO: Created: latency-svc-54sqf May 14 11:15:19.632: INFO: Got endpoints: latency-svc-54sqf [1.103605878s] May 14 11:15:19.652: INFO: Created: latency-svc-j5cmx May 14 11:15:19.668: INFO: Got endpoints: latency-svc-j5cmx [996.345879ms] May 14 11:15:19.683: INFO: Created: latency-svc-q6frx May 14 11:15:19.731: INFO: Got endpoints: latency-svc-q6frx [995.509154ms] May 14 11:15:19.737: INFO: Created: latency-svc-hkmkh May 14 11:15:19.767: INFO: Got endpoints: latency-svc-hkmkh [852.098951ms] May 14 11:15:19.798: INFO: Created: latency-svc-zq92p May 14 11:15:19.807: INFO: Got endpoints: latency-svc-zq92p 
[838.267713ms] May 14 11:15:19.827: INFO: Created: latency-svc-d268j May 14 11:15:19.875: INFO: Got endpoints: latency-svc-d268j [803.546597ms] May 14 11:15:19.899: INFO: Created: latency-svc-zxcfl May 14 11:15:19.922: INFO: Got endpoints: latency-svc-zxcfl [802.0026ms] May 14 11:15:20.030: INFO: Created: latency-svc-xgmxb May 14 11:15:20.061: INFO: Created: latency-svc-vqhq6 May 14 11:15:20.062: INFO: Got endpoints: latency-svc-xgmxb [870.2385ms] May 14 11:15:20.097: INFO: Got endpoints: latency-svc-vqhq6 [869.323767ms] May 14 11:15:20.211: INFO: Created: latency-svc-qsn5f May 14 11:15:20.230: INFO: Got endpoints: latency-svc-qsn5f [971.508913ms] May 14 11:15:20.259: INFO: Created: latency-svc-t9nc2 May 14 11:15:20.271: INFO: Got endpoints: latency-svc-t9nc2 [948.415496ms] May 14 11:15:20.302: INFO: Created: latency-svc-hkksg May 14 11:15:20.384: INFO: Got endpoints: latency-svc-hkksg [1.001376728s] May 14 11:15:20.403: INFO: Created: latency-svc-p5rz7 May 14 11:15:20.415: INFO: Got endpoints: latency-svc-p5rz7 [959.613892ms] May 14 11:15:20.445: INFO: Created: latency-svc-vxhpr May 14 11:15:20.469: INFO: Got endpoints: latency-svc-vxhpr [958.191134ms] May 14 11:15:20.547: INFO: Created: latency-svc-87k4l May 14 11:15:20.572: INFO: Got endpoints: latency-svc-87k4l [978.529482ms] May 14 11:15:20.602: INFO: Created: latency-svc-27cm6 May 14 11:15:20.614: INFO: Got endpoints: latency-svc-27cm6 [981.932919ms] May 14 11:15:20.638: INFO: Created: latency-svc-xczdd May 14 11:15:20.695: INFO: Got endpoints: latency-svc-xczdd [1.02699062s] May 14 11:15:20.703: INFO: Created: latency-svc-ddgjd May 14 11:15:20.714: INFO: Got endpoints: latency-svc-ddgjd [982.949972ms] May 14 11:15:20.745: INFO: Created: latency-svc-q6fsh May 14 11:15:20.757: INFO: Got endpoints: latency-svc-q6fsh [990.026675ms] May 14 11:15:20.782: INFO: Created: latency-svc-lrxhw May 14 11:15:20.793: INFO: Got endpoints: latency-svc-lrxhw [985.765003ms] May 14 11:15:20.907: INFO: Created: latency-svc-kxxkc 
May 14 11:15:20.919: INFO: Got endpoints: latency-svc-kxxkc [1.044413886s] May 14 11:15:20.938: INFO: Created: latency-svc-hcm5l May 14 11:15:20.956: INFO: Got endpoints: latency-svc-hcm5l [1.034721672s] May 14 11:15:21.075: INFO: Created: latency-svc-xmt96 May 14 11:15:21.088: INFO: Got endpoints: latency-svc-xmt96 [1.026215569s] May 14 11:15:21.105: INFO: Created: latency-svc-958ss May 14 11:15:21.118: INFO: Got endpoints: latency-svc-958ss [1.021206564s] May 14 11:15:21.214: INFO: Created: latency-svc-jpbxg May 14 11:15:21.239: INFO: Got endpoints: latency-svc-jpbxg [1.009330476s] May 14 11:15:21.335: INFO: Created: latency-svc-fkq26 May 14 11:15:21.369: INFO: Got endpoints: latency-svc-fkq26 [1.098348894s] May 14 11:15:21.370: INFO: Created: latency-svc-kw88s May 14 11:15:21.395: INFO: Got endpoints: latency-svc-kw88s [1.011702857s] May 14 11:15:21.511: INFO: Created: latency-svc-zxxkj May 14 11:15:21.551: INFO: Got endpoints: latency-svc-zxxkj [1.136049591s] May 14 11:15:21.585: INFO: Created: latency-svc-85cbv May 14 11:15:21.671: INFO: Got endpoints: latency-svc-85cbv [1.201459317s] May 14 11:15:21.687: INFO: Created: latency-svc-8xjx2 May 14 11:15:21.702: INFO: Got endpoints: latency-svc-8xjx2 [1.130138146s] May 14 11:15:21.718: INFO: Created: latency-svc-hv8hp May 14 11:15:21.743: INFO: Got endpoints: latency-svc-hv8hp [1.128891222s] May 14 11:15:21.810: INFO: Created: latency-svc-5lrqr May 14 11:15:21.822: INFO: Got endpoints: latency-svc-5lrqr [1.126983635s] May 14 11:15:21.857: INFO: Created: latency-svc-mlm52 May 14 11:15:21.892: INFO: Got endpoints: latency-svc-mlm52 [1.177916475s] May 14 11:15:21.983: INFO: Created: latency-svc-qb7hv May 14 11:15:22.003: INFO: Got endpoints: latency-svc-qb7hv [1.246094907s] May 14 11:15:22.079: INFO: Created: latency-svc-rbgct May 14 11:15:22.117: INFO: Got endpoints: latency-svc-rbgct [1.323804346s] May 14 11:15:22.180: INFO: Created: latency-svc-4dhgd May 14 11:15:22.213: INFO: Got endpoints: latency-svc-4dhgd 
[1.294138373s] May 14 11:15:22.276: INFO: Created: latency-svc-jrc2v May 14 11:15:22.293: INFO: Got endpoints: latency-svc-jrc2v [1.336268605s] May 14 11:15:22.319: INFO: Created: latency-svc-vhnrm May 14 11:15:22.328: INFO: Got endpoints: latency-svc-vhnrm [1.239897157s] May 14 11:15:22.343: INFO: Created: latency-svc-fdbz5 May 14 11:15:22.352: INFO: Got endpoints: latency-svc-fdbz5 [1.234223042s] May 14 11:15:22.391: INFO: Created: latency-svc-t8qqv May 14 11:15:22.402: INFO: Got endpoints: latency-svc-t8qqv [1.163119691s] May 14 11:15:22.463: INFO: Created: latency-svc-56sdz May 14 11:15:22.479: INFO: Got endpoints: latency-svc-56sdz [1.109397058s] May 14 11:15:22.539: INFO: Created: latency-svc-rfw57 May 14 11:15:22.551: INFO: Got endpoints: latency-svc-rfw57 [1.155230424s] May 14 11:15:22.572: INFO: Created: latency-svc-nnsvw May 14 11:15:22.581: INFO: Got endpoints: latency-svc-nnsvw [1.030266583s] May 14 11:15:22.601: INFO: Created: latency-svc-mwj5r May 14 11:15:22.618: INFO: Got endpoints: latency-svc-mwj5r [946.831217ms] May 14 11:15:22.695: INFO: Created: latency-svc-q8x8b May 14 11:15:22.700: INFO: Got endpoints: latency-svc-q8x8b [998.126171ms] May 14 11:15:22.763: INFO: Created: latency-svc-j5gjb May 14 11:15:22.775: INFO: Got endpoints: latency-svc-j5gjb [1.031921249s] May 14 11:15:22.839: INFO: Created: latency-svc-pw6w5 May 14 11:15:22.847: INFO: Got endpoints: latency-svc-pw6w5 [1.025144682s] May 14 11:15:22.882: INFO: Created: latency-svc-jkgvl May 14 11:15:22.907: INFO: Got endpoints: latency-svc-jkgvl [1.014808198s] May 14 11:15:22.936: INFO: Created: latency-svc-h6zms May 14 11:15:22.983: INFO: Got endpoints: latency-svc-h6zms [979.399338ms] May 14 11:15:22.986: INFO: Created: latency-svc-4t7mf May 14 11:15:23.023: INFO: Got endpoints: latency-svc-4t7mf [906.204467ms] May 14 11:15:23.050: INFO: Created: latency-svc-c6l5z May 14 11:15:23.064: INFO: Got endpoints: latency-svc-c6l5z [850.781985ms] May 14 11:15:23.128: INFO: Created: 
latency-svc-qhtjf May 14 11:15:23.159: INFO: Got endpoints: latency-svc-qhtjf [866.423347ms] May 14 11:15:23.195: INFO: Created: latency-svc-kgknq May 14 11:15:23.210: INFO: Got endpoints: latency-svc-kgknq [881.707292ms] May 14 11:15:23.258: INFO: Created: latency-svc-r9xww May 14 11:15:23.271: INFO: Got endpoints: latency-svc-r9xww [918.904319ms] May 14 11:15:23.297: INFO: Created: latency-svc-69sm5 May 14 11:15:23.306: INFO: Got endpoints: latency-svc-69sm5 [903.41476ms] May 14 11:15:23.325: INFO: Created: latency-svc-f8z2r May 14 11:15:23.345: INFO: Got endpoints: latency-svc-f8z2r [866.364419ms] May 14 11:15:23.403: INFO: Created: latency-svc-mth6s May 14 11:15:23.406: INFO: Got endpoints: latency-svc-mth6s [855.157194ms] May 14 11:15:23.434: INFO: Created: latency-svc-bmrkc May 14 11:15:23.452: INFO: Got endpoints: latency-svc-bmrkc [870.664381ms] May 14 11:15:23.476: INFO: Created: latency-svc-d99kb May 14 11:15:23.501: INFO: Got endpoints: latency-svc-d99kb [882.931086ms] May 14 11:15:23.563: INFO: Created: latency-svc-7jlgc May 14 11:15:23.570: INFO: Got endpoints: latency-svc-7jlgc [870.113005ms] May 14 11:15:23.609: INFO: Created: latency-svc-zwcrp May 14 11:15:23.656: INFO: Got endpoints: latency-svc-zwcrp [881.83833ms] May 14 11:15:23.707: INFO: Created: latency-svc-kl2xs May 14 11:15:23.710: INFO: Got endpoints: latency-svc-kl2xs [862.650978ms] May 14 11:15:23.740: INFO: Created: latency-svc-7bfv9 May 14 11:15:23.764: INFO: Got endpoints: latency-svc-7bfv9 [856.517416ms] May 14 11:15:23.789: INFO: Created: latency-svc-mm9mg May 14 11:15:23.800: INFO: Got endpoints: latency-svc-mm9mg [816.855265ms] May 14 11:15:23.850: INFO: Created: latency-svc-fcqrr May 14 11:15:23.872: INFO: Got endpoints: latency-svc-fcqrr [848.468604ms] May 14 11:15:23.896: INFO: Created: latency-svc-rskkg May 14 11:15:23.909: INFO: Got endpoints: latency-svc-rskkg [845.010712ms] May 14 11:15:23.926: INFO: Created: latency-svc-kpbvl May 14 11:15:24.007: INFO: Got endpoints: 
latency-svc-kpbvl [847.52284ms] May 14 11:15:24.012: INFO: Created: latency-svc-pxp6f May 14 11:15:24.029: INFO: Got endpoints: latency-svc-pxp6f [819.69926ms] May 14 11:15:24.076: INFO: Created: latency-svc-2gqk2 May 14 11:15:24.102: INFO: Got endpoints: latency-svc-2gqk2 [830.643883ms] May 14 11:15:24.175: INFO: Created: latency-svc-q95lt May 14 11:15:24.197: INFO: Got endpoints: latency-svc-q95lt [890.990947ms] May 14 11:15:24.238: INFO: Created: latency-svc-nhlr9 May 14 11:15:24.253: INFO: Got endpoints: latency-svc-nhlr9 [907.538082ms] May 14 11:15:24.368: INFO: Created: latency-svc-whvrc May 14 11:15:24.372: INFO: Got endpoints: latency-svc-whvrc [965.782218ms] May 14 11:15:24.400: INFO: Created: latency-svc-67s8d May 14 11:15:24.414: INFO: Got endpoints: latency-svc-67s8d [962.245109ms] May 14 11:15:24.433: INFO: Created: latency-svc-sl2tv May 14 11:15:24.463: INFO: Got endpoints: latency-svc-sl2tv [961.941504ms] May 14 11:15:24.527: INFO: Created: latency-svc-vkdcc May 14 11:15:24.535: INFO: Got endpoints: latency-svc-vkdcc [964.378387ms] May 14 11:15:24.563: INFO: Created: latency-svc-rkqw6 May 14 11:15:24.589: INFO: Got endpoints: latency-svc-rkqw6 [932.480096ms] May 14 11:15:24.622: INFO: Created: latency-svc-s766d May 14 11:15:24.677: INFO: Got endpoints: latency-svc-s766d [967.438428ms] May 14 11:15:24.695: INFO: Created: latency-svc-56gl6 May 14 11:15:24.751: INFO: Got endpoints: latency-svc-56gl6 [987.386645ms] May 14 11:15:24.868: INFO: Created: latency-svc-2vj7t May 14 11:15:24.871: INFO: Got endpoints: latency-svc-2vj7t [1.07147904s] May 14 11:15:25.032: INFO: Created: latency-svc-7ftgw May 14 11:15:25.069: INFO: Got endpoints: latency-svc-7ftgw [1.19756064s] May 14 11:15:25.233: INFO: Created: latency-svc-k5w68 May 14 11:15:25.260: INFO: Got endpoints: latency-svc-k5w68 [1.350539895s] May 14 11:15:25.312: INFO: Created: latency-svc-zk7rl May 14 11:15:25.390: INFO: Got endpoints: latency-svc-zk7rl [1.383373585s] May 14 11:15:25.416: INFO: Created: 
latency-svc-wdn9m May 14 11:15:25.452: INFO: Got endpoints: latency-svc-wdn9m [1.422455932s] May 14 11:15:25.488: INFO: Created: latency-svc-74v7c May 14 11:15:25.599: INFO: Got endpoints: latency-svc-74v7c [1.496814435s] May 14 11:15:25.650: INFO: Created: latency-svc-z8xqp May 14 11:15:25.676: INFO: Got endpoints: latency-svc-z8xqp [1.479666971s] May 14 11:15:25.791: INFO: Created: latency-svc-kh59d May 14 11:15:25.830: INFO: Created: latency-svc-knllf May 14 11:15:25.831: INFO: Got endpoints: latency-svc-kh59d [1.578169518s] May 14 11:15:25.872: INFO: Got endpoints: latency-svc-knllf [1.500141543s] May 14 11:15:25.946: INFO: Created: latency-svc-7f5j2 May 14 11:15:25.983: INFO: Got endpoints: latency-svc-7f5j2 [1.568756327s] May 14 11:15:26.040: INFO: Created: latency-svc-cbwfs May 14 11:15:26.147: INFO: Got endpoints: latency-svc-cbwfs [1.684282275s] May 14 11:15:26.197: INFO: Created: latency-svc-nwdg8 May 14 11:15:26.212: INFO: Got endpoints: latency-svc-nwdg8 [1.677179676s] May 14 11:15:26.282: INFO: Created: latency-svc-r8zn6 May 14 11:15:26.304: INFO: Got endpoints: latency-svc-r8zn6 [1.714569907s] May 14 11:15:26.346: INFO: Created: latency-svc-929wz May 14 11:15:26.368: INFO: Got endpoints: latency-svc-929wz [1.691304716s] May 14 11:15:26.425: INFO: Created: latency-svc-fr949 May 14 11:15:26.454: INFO: Got endpoints: latency-svc-fr949 [1.703246072s] May 14 11:15:26.457: INFO: Created: latency-svc-mgzzc May 14 11:15:26.478: INFO: Got endpoints: latency-svc-mgzzc [1.606528014s] May 14 11:15:26.519: INFO: Created: latency-svc-djrwr May 14 11:15:26.564: INFO: Got endpoints: latency-svc-djrwr [1.494799828s] May 14 11:15:26.574: INFO: Created: latency-svc-l6g47 May 14 11:15:26.591: INFO: Got endpoints: latency-svc-l6g47 [1.330952429s] May 14 11:15:26.628: INFO: Created: latency-svc-2z6zj May 14 11:15:26.650: INFO: Got endpoints: latency-svc-2z6zj [1.259818715s] May 14 11:15:26.663: INFO: Created: latency-svc-ps9mp May 14 11:15:26.710: INFO: Got endpoints: 
latency-svc-ps9mp [1.258636063s] May 14 11:15:26.731: INFO: Created: latency-svc-pjvzv May 14 11:15:26.754: INFO: Got endpoints: latency-svc-pjvzv [1.155480979s] May 14 11:15:26.790: INFO: Created: latency-svc-2hc9q May 14 11:15:26.802: INFO: Got endpoints: latency-svc-2hc9q [1.126130169s] May 14 11:15:26.881: INFO: Created: latency-svc-fnr28 May 14 11:15:26.892: INFO: Got endpoints: latency-svc-fnr28 [1.061058217s] May 14 11:15:26.916: INFO: Created: latency-svc-ps9mt May 14 11:15:26.928: INFO: Got endpoints: latency-svc-ps9mt [1.056475398s] May 14 11:15:26.957: INFO: Created: latency-svc-qwvhs May 14 11:15:26.971: INFO: Got endpoints: latency-svc-qwvhs [987.765838ms] May 14 11:15:27.018: INFO: Created: latency-svc-xzwmc May 14 11:15:27.048: INFO: Got endpoints: latency-svc-xzwmc [901.300785ms] May 14 11:15:27.049: INFO: Created: latency-svc-pq5mv May 14 11:15:27.062: INFO: Got endpoints: latency-svc-pq5mv [850.080646ms] May 14 11:15:27.095: INFO: Created: latency-svc-mpscv May 14 11:15:27.116: INFO: Got endpoints: latency-svc-mpscv [812.161116ms] May 14 11:15:27.168: INFO: Created: latency-svc-lb46w May 14 11:15:27.177: INFO: Got endpoints: latency-svc-lb46w [808.676521ms] May 14 11:15:27.199: INFO: Created: latency-svc-cxfsg May 14 11:15:27.212: INFO: Got endpoints: latency-svc-cxfsg [757.687319ms] May 14 11:15:27.241: INFO: Created: latency-svc-sqtbl May 14 11:15:27.262: INFO: Got endpoints: latency-svc-sqtbl [783.857717ms] May 14 11:15:27.311: INFO: Created: latency-svc-s82w7 May 14 11:15:27.333: INFO: Got endpoints: latency-svc-s82w7 [769.225064ms] May 14 11:15:27.370: INFO: Created: latency-svc-csnbz May 14 11:15:27.375: INFO: Got endpoints: latency-svc-csnbz [784.217315ms] May 14 11:15:27.390: INFO: Created: latency-svc-slcb9 May 14 11:15:27.407: INFO: Got endpoints: latency-svc-slcb9 [756.555719ms] May 14 11:15:27.444: INFO: Created: latency-svc-fxb8c May 14 11:15:27.461: INFO: Got endpoints: latency-svc-fxb8c [750.842239ms] May 14 11:15:27.486: INFO: 
Created: latency-svc-nfv9c May 14 11:15:27.496: INFO: Got endpoints: latency-svc-nfv9c [741.726003ms] May 14 11:15:27.516: INFO: Created: latency-svc-xptr8 May 14 11:15:27.542: INFO: Got endpoints: latency-svc-xptr8 [739.220605ms] May 14 11:15:27.613: INFO: Created: latency-svc-rndcj May 14 11:15:27.614: INFO: Got endpoints: latency-svc-rndcj [722.177722ms] May 14 11:15:27.704: INFO: Created: latency-svc-mtvxc May 14 11:15:27.791: INFO: Got endpoints: latency-svc-mtvxc [862.191119ms] May 14 11:15:27.811: INFO: Created: latency-svc-f49wf May 14 11:15:27.821: INFO: Got endpoints: latency-svc-f49wf [850.0521ms] May 14 11:15:27.839: INFO: Created: latency-svc-bwr79 May 14 11:15:27.971: INFO: Got endpoints: latency-svc-bwr79 [922.016145ms] May 14 11:15:27.972: INFO: Created: latency-svc-b25c4 May 14 11:15:27.990: INFO: Got endpoints: latency-svc-b25c4 [927.668201ms] May 14 11:15:28.008: INFO: Created: latency-svc-npjlq May 14 11:15:28.032: INFO: Got endpoints: latency-svc-npjlq [916.626943ms] May 14 11:15:28.123: INFO: Created: latency-svc-vxx25 May 14 11:15:28.152: INFO: Got endpoints: latency-svc-vxx25 [975.008587ms] May 14 11:15:28.213: INFO: Created: latency-svc-nvzw5 May 14 11:15:28.249: INFO: Got endpoints: latency-svc-nvzw5 [1.036325369s] May 14 11:15:28.273: INFO: Created: latency-svc-859rk May 14 11:15:28.296: INFO: Got endpoints: latency-svc-859rk [1.034251453s] May 14 11:15:28.320: INFO: Created: latency-svc-g7lfv May 14 11:15:28.333: INFO: Got endpoints: latency-svc-g7lfv [999.632835ms] May 14 11:15:28.384: INFO: Created: latency-svc-ml25n May 14 11:15:28.387: INFO: Got endpoints: latency-svc-ml25n [1.011869877s] May 14 11:15:28.411: INFO: Created: latency-svc-vn6wl May 14 11:15:28.435: INFO: Got endpoints: latency-svc-vn6wl [1.028597261s] May 14 11:15:28.458: INFO: Created: latency-svc-sgp5f May 14 11:15:28.472: INFO: Got endpoints: latency-svc-sgp5f [1.010193466s] May 14 11:15:28.528: INFO: Created: latency-svc-6hvb4 May 14 11:15:28.542: INFO: Got 
endpoints: latency-svc-6hvb4 [1.045841939s] May 14 11:15:28.574: INFO: Created: latency-svc-pgbd6 May 14 11:15:28.593: INFO: Got endpoints: latency-svc-pgbd6 [1.051445684s] May 14 11:15:28.627: INFO: Created: latency-svc-6wsj4 May 14 11:15:28.685: INFO: Got endpoints: latency-svc-6wsj4 [1.071124382s] May 14 11:15:28.686: INFO: Latencies: [123.098476ms 164.457263ms 369.403462ms 496.919182ms 640.522666ms 670.778458ms 722.177722ms 739.220605ms 741.726003ms 750.842239ms 756.555719ms 757.687319ms 757.990417ms 769.225064ms 783.26741ms 783.857717ms 784.217315ms 786.482257ms 793.57884ms 794.34344ms 801.62503ms 802.0026ms 803.546597ms 808.676521ms 812.161116ms 813.285225ms 816.855265ms 819.69926ms 821.991913ms 823.559002ms 824.678587ms 830.643883ms 834.597488ms 835.204349ms 838.267713ms 845.010712ms 847.52284ms 848.468604ms 850.0521ms 850.080646ms 850.781985ms 852.098951ms 855.157194ms 856.517416ms 862.191119ms 862.650978ms 866.364419ms 866.423347ms 869.323767ms 870.113005ms 870.2385ms 870.664381ms 878.903149ms 881.707292ms 881.83833ms 882.931086ms 888.112621ms 890.990947ms 891.185951ms 898.382016ms 901.300785ms 903.41476ms 906.204467ms 907.538082ms 909.516988ms 916.626943ms 917.385412ms 918.904319ms 920.834516ms 922.016145ms 927.668201ms 932.480096ms 946.831217ms 948.415496ms 950.742255ms 958.191134ms 959.613892ms 961.941504ms 962.245109ms 964.378387ms 965.782218ms 967.438428ms 971.508913ms 975.008587ms 978.529482ms 979.399338ms 981.932919ms 982.949972ms 985.765003ms 987.386645ms 987.765838ms 990.026675ms 995.509154ms 996.345879ms 996.964152ms 998.126171ms 999.632835ms 1.001376728s 1.001517541s 1.005846936s 1.006414109s 1.009330476s 1.010193466s 1.011702857s 1.011869877s 1.014808198s 1.021206564s 1.025144682s 1.026215569s 1.02699062s 1.028597261s 1.030266583s 1.031921249s 1.034251453s 1.034721672s 1.034763411s 1.036325369s 1.044413886s 1.045841939s 1.049739787s 1.051445684s 1.056475398s 1.061058217s 1.069036633s 1.071124382s 1.07147904s 1.081746513s 1.098348894s 
1.098762589s 1.103605878s 1.109397058s 1.126130169s 1.126983635s 1.127264829s 1.128891222s 1.130138146s 1.136049591s 1.147288531s 1.152668739s 1.155230424s 1.155480979s 1.159934119s 1.163119691s 1.167434144s 1.176634325s 1.177916475s 1.182522495s 1.18850648s 1.194271903s 1.19756064s 1.201459317s 1.203134316s 1.205405906s 1.215798715s 1.227573066s 1.228017849s 1.234223042s 1.238923635s 1.239230947s 1.239897157s 1.240143393s 1.242490126s 1.246094907s 1.251254249s 1.251765507s 1.253445323s 1.258636063s 1.259818715s 1.267139497s 1.275949706s 1.294138373s 1.307098484s 1.311541312s 1.314260641s 1.323804346s 1.326210233s 1.330952429s 1.336268605s 1.350539895s 1.359091737s 1.374657353s 1.383373585s 1.392116601s 1.403475066s 1.422455932s 1.423300021s 1.445708349s 1.479666971s 1.481258389s 1.494799828s 1.496814435s 1.500141543s 1.568756327s 1.578169518s 1.606528014s 1.677179676s 1.684282275s 1.691304716s 1.703246072s 1.714569907s] May 14 11:15:28.686: INFO: 50 %ile: 1.006414109s May 14 11:15:28.686: INFO: 90 %ile: 1.374657353s May 14 11:15:28.686: INFO: 99 %ile: 1.703246072s May 14 11:15:28.686: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:15:28.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8385" for this suite. 
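The `50 %ile` / `90 %ile` / `99 %ile` summary lines above can be reproduced with a nearest-rank percentile over the sorted samples. This is a minimal sketch, not the e2e framework's actual implementation; the function name and the nearest-rank convention are assumptions.

```python
import math

def percentile(sorted_samples, p):
    # Nearest-rank percentile: the smallest sample such that at least
    # p percent of all samples are less than or equal to it.
    idx = math.ceil(p / 100 * len(sorted_samples)) - 1
    return sorted_samples[idx]

# With 200 samples (matching "Total sample count: 200" above), the
# 50/90/99 %iles are the 100th, 180th and 198th values in sorted order.
samples = sorted(range(1, 201))
print(percentile(samples, 50), percentile(samples, 90), percentile(samples, 99))
```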
• [SLOW TEST:19.970 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":61,"skipped":843,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:15:28.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 14 11:15:28.872: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1274 /api/v1/namespaces/watch-1274/configmaps/e2e-watch-test-watch-closed 1e385d52-1118-4b12-8fa4-3b5baf45d536 4271375 0 2020-05-14 11:15:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-14 11:15:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 
58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 14 11:15:28.872: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1274 /api/v1/namespaces/watch-1274/configmaps/e2e-watch-test-watch-closed 1e385d52-1118-4b12-8fa4-3b5baf45d536 4271376 0 2020-05-14 11:15:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-14 11:15:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 14 11:15:28.915: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1274 /api/v1/namespaces/watch-1274/configmaps/e2e-watch-test-watch-closed 1e385d52-1118-4b12-8fa4-3b5baf45d536 4271377 0 2020-05-14 11:15:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-14 11:15:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 
105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 14 11:15:28.915: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1274 /api/v1/namespaces/watch-1274/configmaps/e2e-watch-test-watch-closed 1e385d52-1118-4b12-8fa4-3b5baf45d536 4271378 0 2020-05-14 11:15:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-14 11:15:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:15:28.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1274" for this suite. 
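The restart-from-last-resourceVersion behaviour this Watchers test verifies can be sketched with a toy in-memory event log. All names here are illustrative only — this is not client-go or any real Kubernetes client API.

```python
from dataclasses import dataclass

@dataclass
class Event:
    type: str              # ADDED / MODIFIED / DELETED, as in the log lines above
    resource_version: int  # monotonically increasing version number

class EventLog:
    """Toy stand-in for the API server's watch cache."""
    def __init__(self):
        self._events = []

    def record(self, event_type):
        self._events.append(Event(event_type, len(self._events) + 1))

    def watch(self, since_rv=0):
        # A (re)started watch replays every event newer than since_rv.
        return [e for e in self._events if e.resource_version > since_rv]

log = EventLog()
log.record("ADDED")       # create the configmap
log.record("MODIFIED")    # modify it once
first = log.watch()       # the first watch receives both events, then is closed
last_seen = first[-1].resource_version

log.record("MODIFIED")    # second modification, while no watch is open
log.record("DELETED")     # deletion

# Restarting from last_seen yields exactly the events missed in between.
resumed = log.watch(since_rv=last_seen)
print([e.type for e in resumed])
```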
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":62,"skipped":862,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:15:28.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-e3ff43ed-f00b-444c-9134-6f9e39a7e10c STEP: Creating a pod to test consume secrets May 14 11:15:29.099: INFO: Waiting up to 5m0s for pod "pod-secrets-5ff96007-85e7-4250-b4f4-fc246eaeece1" in namespace "secrets-4266" to be "Succeeded or Failed" May 14 11:15:29.108: INFO: Pod "pod-secrets-5ff96007-85e7-4250-b4f4-fc246eaeece1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.494855ms May 14 11:15:31.112: INFO: Pod "pod-secrets-5ff96007-85e7-4250-b4f4-fc246eaeece1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012187088s May 14 11:15:33.116: INFO: Pod "pod-secrets-5ff96007-85e7-4250-b4f4-fc246eaeece1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016528411s May 14 11:15:35.150: INFO: Pod "pod-secrets-5ff96007-85e7-4250-b4f4-fc246eaeece1": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.0504801s May 14 11:15:37.185: INFO: Pod "pod-secrets-5ff96007-85e7-4250-b4f4-fc246eaeece1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.086114071s STEP: Saw pod success May 14 11:15:37.186: INFO: Pod "pod-secrets-5ff96007-85e7-4250-b4f4-fc246eaeece1" satisfied condition "Succeeded or Failed" May 14 11:15:37.191: INFO: Trying to get logs from node kali-worker pod pod-secrets-5ff96007-85e7-4250-b4f4-fc246eaeece1 container secret-volume-test: STEP: delete the pod May 14 11:15:37.231: INFO: Waiting for pod pod-secrets-5ff96007-85e7-4250-b4f4-fc246eaeece1 to disappear May 14 11:15:37.251: INFO: Pod pod-secrets-5ff96007-85e7-4250-b4f4-fc246eaeece1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:15:37.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4266" for this suite. • [SLOW TEST:8.299 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":876,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 
11:15:37.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:15:37.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5292" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":64,"skipped":910,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:15:37.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 14 
11:15:37.570: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6cbe93c6-5665-4c68-a68d-a657bdb309b1" in namespace "projected-8359" to be "Succeeded or Failed" May 14 11:15:37.575: INFO: Pod "downwardapi-volume-6cbe93c6-5665-4c68-a68d-a657bdb309b1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.300838ms May 14 11:15:39.593: INFO: Pod "downwardapi-volume-6cbe93c6-5665-4c68-a68d-a657bdb309b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023182474s May 14 11:15:41.601: INFO: Pod "downwardapi-volume-6cbe93c6-5665-4c68-a68d-a657bdb309b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03127937s STEP: Saw pod success May 14 11:15:41.601: INFO: Pod "downwardapi-volume-6cbe93c6-5665-4c68-a68d-a657bdb309b1" satisfied condition "Succeeded or Failed" May 14 11:15:41.630: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-6cbe93c6-5665-4c68-a68d-a657bdb309b1 container client-container: STEP: delete the pod May 14 11:15:41.762: INFO: Waiting for pod downwardapi-volume-6cbe93c6-5665-4c68-a68d-a657bdb309b1 to disappear May 14 11:15:41.875: INFO: Pod downwardapi-volume-6cbe93c6-5665-4c68-a68d-a657bdb309b1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:15:41.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8359" for this suite. 
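A pod of the kind this projected downwardAPI test creates might look like the following. The log does not show the manifest, so the image, command, and mount paths are assumptions; only the projected downwardAPI volume exposing `metadata.name` as a `podname` file is the mechanism under test.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```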
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":918,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:15:42.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:15:42.294: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Pending, waiting for it to be Running (with Ready = true) May 14 11:15:44.414: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Pending, waiting for it to be Running (with Ready = true) May 14 11:15:46.315: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Running (Ready = false) May 14 11:15:48.298: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Running (Ready = false) May 14 11:15:50.443: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Running (Ready = false) May 14 11:15:52.394: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is 
Running (Ready = false) May 14 11:15:54.328: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Running (Ready = false) May 14 11:15:56.299: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Running (Ready = false) May 14 11:15:58.323: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Running (Ready = false) May 14 11:16:00.546: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Running (Ready = false) May 14 11:16:02.299: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Running (Ready = false) May 14 11:16:04.299: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Running (Ready = false) May 14 11:16:06.298: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Running (Ready = false) May 14 11:16:08.298: INFO: The status of Pod test-webserver-08576a62-233e-49bc-85dc-232a8807c628 is Running (Ready = true) May 14 11:16:08.299: INFO: Container started at 2020-05-14 11:15:45 +0000 UTC, pod became ready at 2020-05-14 11:16:07 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:16:08.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4176" for this suite. 
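The log above shows the container starting at 11:15:45 but the pod only becoming ready at 11:16:07, consistent with a readiness probe whose initial delay keeps the pod at `Ready = false` for roughly 20 seconds. A sketch of such a pod spec (the image and exact timings are assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver              # hypothetical; mirrors the pod name prefix above
spec:
  containers:
  - name: test-webserver
    image: nginx                    # assumed image
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20      # pod reports Ready=false until the delay elapses
      periodSeconds: 3
```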
• [SLOW TEST:26.176 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":962,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:16:08.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-dc451673-4f0d-427e-b4f4-03d7478dee28 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:16:08.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8653" for this suite. 
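The empty-secret-key test expects creation to fail. A hypothetical manifest of the kind it submits: the API server's validation rejects it because a `data` key must be a non-empty, file-name-like string.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test        # hypothetical name
data:
  "": dmFsdWU=                      # empty key -> validation error on create
```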
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":67,"skipped":965,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:16:08.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-66904f57-271c-4636-8965-2b5ae6cec75d STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:16:23.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7908" for this suite. 
• [SLOW TEST:14.178 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":968,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:16:23.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod May 14 11:16:27.662: INFO: Successfully updated pod "labelsupdate232eba25-344a-45b6-82a3-3e2032561c4c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:16:29.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1702" for this suite. 
• [SLOW TEST:6.662 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":979,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:16:29.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-9553 STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 11:16:30.141: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 14 11:16:31.134: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 14 11:16:33.168: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 14 11:16:35.965: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 14 11:16:38.222: INFO: The status of Pod netserver-0 is Pending, 
waiting for it to be Running (with Ready = true) May 14 11:16:39.377: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 14 11:16:41.365: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 14 11:16:43.138: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:16:45.138: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:16:47.138: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:16:49.137: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:16:51.138: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:16:53.137: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:16:55.216: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:16:57.195: INFO: The status of Pod netserver-0 is Running (Ready = true) May 14 11:16:57.330: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 14 11:17:12.820: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.117 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9553 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:17:12.820: INFO: >>> kubeConfig: /root/.kube/config I0514 11:17:12.853505 7 log.go:172] (0xc002fe2000) (0xc0010d6780) Create stream I0514 11:17:12.853532 7 log.go:172] (0xc002fe2000) (0xc0010d6780) Stream added, broadcasting: 1 I0514 11:17:12.854732 7 log.go:172] (0xc002fe2000) Reply frame received for 1 I0514 11:17:12.854752 7 log.go:172] (0xc002fe2000) (0xc0010d6820) Create stream I0514 11:17:12.854759 7 log.go:172] (0xc002fe2000) (0xc0010d6820) Stream added, broadcasting: 3 I0514 11:17:12.855355 7 log.go:172] (0xc002fe2000) Reply frame received for 3 I0514 11:17:12.855379 7 log.go:172] (0xc002fe2000) (0xc002a53400) Create stream 
I0514 11:17:12.855388 7 log.go:172] (0xc002fe2000) (0xc002a53400) Stream added, broadcasting: 5 I0514 11:17:12.855977 7 log.go:172] (0xc002fe2000) Reply frame received for 5 I0514 11:17:14.014671 7 log.go:172] (0xc002fe2000) Data frame received for 3 I0514 11:17:14.014699 7 log.go:172] (0xc0010d6820) (3) Data frame handling I0514 11:17:14.014735 7 log.go:172] (0xc002fe2000) Data frame received for 5 I0514 11:17:14.014763 7 log.go:172] (0xc002a53400) (5) Data frame handling I0514 11:17:14.014788 7 log.go:172] (0xc0010d6820) (3) Data frame sent I0514 11:17:14.014806 7 log.go:172] (0xc002fe2000) Data frame received for 3 I0514 11:17:14.014826 7 log.go:172] (0xc0010d6820) (3) Data frame handling I0514 11:17:14.016186 7 log.go:172] (0xc002fe2000) Data frame received for 1 I0514 11:17:14.016209 7 log.go:172] (0xc0010d6780) (1) Data frame handling I0514 11:17:14.016223 7 log.go:172] (0xc0010d6780) (1) Data frame sent I0514 11:17:14.016249 7 log.go:172] (0xc002fe2000) (0xc0010d6780) Stream removed, broadcasting: 1 I0514 11:17:14.016271 7 log.go:172] (0xc002fe2000) Go away received I0514 11:17:14.016586 7 log.go:172] (0xc002fe2000) (0xc0010d6780) Stream removed, broadcasting: 1 I0514 11:17:14.016603 7 log.go:172] (0xc002fe2000) (0xc0010d6820) Stream removed, broadcasting: 3 I0514 11:17:14.016614 7 log.go:172] (0xc002fe2000) (0xc002a53400) Stream removed, broadcasting: 5 May 14 11:17:14.016: INFO: Found all expected endpoints: [netserver-0] May 14 11:17:14.019: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.241 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9553 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:17:14.019: INFO: >>> kubeConfig: /root/.kube/config I0514 11:17:14.049261 7 log.go:172] (0xc002fe2580) (0xc0010d6d20) Create stream I0514 11:17:14.049289 7 log.go:172] (0xc002fe2580) (0xc0010d6d20) Stream added, broadcasting: 1 I0514 11:17:14.050349 
7 log.go:172] (0xc002fe2580) Reply frame received for 1 I0514 11:17:14.050378 7 log.go:172] (0xc002fe2580) (0xc0011ec000) Create stream I0514 11:17:14.050386 7 log.go:172] (0xc002fe2580) (0xc0011ec000) Stream added, broadcasting: 3 I0514 11:17:14.051181 7 log.go:172] (0xc002fe2580) Reply frame received for 3 I0514 11:17:14.051201 7 log.go:172] (0xc002fe2580) (0xc0010d6f00) Create stream I0514 11:17:14.051207 7 log.go:172] (0xc002fe2580) (0xc0010d6f00) Stream added, broadcasting: 5 I0514 11:17:14.051733 7 log.go:172] (0xc002fe2580) Reply frame received for 5 I0514 11:17:15.122827 7 log.go:172] (0xc002fe2580) Data frame received for 3 I0514 11:17:15.122849 7 log.go:172] (0xc0011ec000) (3) Data frame handling I0514 11:17:15.122859 7 log.go:172] (0xc0011ec000) (3) Data frame sent I0514 11:17:15.123121 7 log.go:172] (0xc002fe2580) Data frame received for 5 I0514 11:17:15.123147 7 log.go:172] (0xc0010d6f00) (5) Data frame handling I0514 11:17:15.123313 7 log.go:172] (0xc002fe2580) Data frame received for 3 I0514 11:17:15.123338 7 log.go:172] (0xc0011ec000) (3) Data frame handling I0514 11:17:15.125074 7 log.go:172] (0xc002fe2580) Data frame received for 1 I0514 11:17:15.125096 7 log.go:172] (0xc0010d6d20) (1) Data frame handling I0514 11:17:15.125240 7 log.go:172] (0xc0010d6d20) (1) Data frame sent I0514 11:17:15.125475 7 log.go:172] (0xc002fe2580) (0xc0010d6d20) Stream removed, broadcasting: 1 I0514 11:17:15.125554 7 log.go:172] (0xc002fe2580) (0xc0010d6d20) Stream removed, broadcasting: 1 I0514 11:17:15.125576 7 log.go:172] (0xc002fe2580) (0xc0011ec000) Stream removed, broadcasting: 3 I0514 11:17:15.125635 7 log.go:172] (0xc002fe2580) Go away received I0514 11:17:15.125784 7 log.go:172] (0xc002fe2580) (0xc0010d6f00) Stream removed, broadcasting: 5 May 14 11:17:15.125: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 
11:17:15.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9553" for this suite. • [SLOW TEST:45.438 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":979,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:17:15.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium May 14 11:17:15.979: INFO: Waiting up to 5m0s for pod "pod-5e469dfc-8287-40bf-bcb5-dbeb9e0123a0" in namespace "emptydir-378" to be "Succeeded or Failed" May 14 11:17:16.875: INFO: Pod "pod-5e469dfc-8287-40bf-bcb5-dbeb9e0123a0": Phase="Pending", 
Reason="", readiness=false. Elapsed: 896.246535ms May 14 11:17:18.879: INFO: Pod "pod-5e469dfc-8287-40bf-bcb5-dbeb9e0123a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.900542562s May 14 11:17:22.353: INFO: Pod "pod-5e469dfc-8287-40bf-bcb5-dbeb9e0123a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.374709985s May 14 11:17:24.374: INFO: Pod "pod-5e469dfc-8287-40bf-bcb5-dbeb9e0123a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.395221216s May 14 11:17:26.515: INFO: Pod "pod-5e469dfc-8287-40bf-bcb5-dbeb9e0123a0": Phase="Running", Reason="", readiness=true. Elapsed: 10.536476823s May 14 11:17:28.527: INFO: Pod "pod-5e469dfc-8287-40bf-bcb5-dbeb9e0123a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.54831292s STEP: Saw pod success May 14 11:17:28.527: INFO: Pod "pod-5e469dfc-8287-40bf-bcb5-dbeb9e0123a0" satisfied condition "Succeeded or Failed" May 14 11:17:28.530: INFO: Trying to get logs from node kali-worker pod pod-5e469dfc-8287-40bf-bcb5-dbeb9e0123a0 container test-container: STEP: delete the pod May 14 11:17:28.848: INFO: Waiting for pod pod-5e469dfc-8287-40bf-bcb5-dbeb9e0123a0 to disappear May 14 11:17:28.987: INFO: Pod pod-5e469dfc-8287-40bf-bcb5-dbeb9e0123a0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:17:28.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-378" for this suite. 
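The node-to-pod UDP checks exec'd earlier in this log send the literal string `hostName` with `nc -w 1 -u <pod-ip> 8081` and filter blank lines out of the reply. A small helper that rebuilds that shell pipeline for a given endpoint (illustrative only; the e2e framework assembles this command internally before passing it to `ExecWithOptions`):

```python
def udp_probe_command(pod_ip, port=8081):
    """Rebuild the shell pipeline seen in the ExecWithOptions entries above:
    send 'hostName' over UDP with a 1-second timeout, then drop empty reply lines."""
    return f"echo hostName | nc -w 1 -u {pod_ip} {port} | grep -v '^\\s*$'"

print(udp_probe_command("10.244.2.117"))
```

The `grep -v '^\s*$'` at the end matters: netcat may emit trailing blank output, and the test treats any non-empty reply as proof the endpoint answered.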
• [SLOW TEST:13.862 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":986,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:17:28.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:17:29.338: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:17:34.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-423" for this suite. 
• [SLOW TEST:5.089 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":72,"skipped":996,"failed":0} SSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:17:34.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:17:35.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9510" for this suite. 
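The Services test above lists services across all namespaces and verifies that the one it created is present in the listing. The equivalent check against an in-memory listing (hypothetical data and service name; the real test queries the Kubernetes API):

```python
# Hypothetical stand-in for a cluster-wide service listing.
all_services = [
    {"namespace": "kube-system", "name": "kube-dns"},
    {"namespace": "services-9510", "name": "test-service"},
]

def find_service(listing, name):
    """Return the namespaces in which a service with the given name exists."""
    return [s["namespace"] for s in listing if s["name"] == name]

print(find_service(all_services, "test-service"))
```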
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":73,"skipped":1001,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:17:35.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-310 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-310;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-310 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-310;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-310.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-310.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-310.svc A)" && 
test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-310.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-310.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-310.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-310.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-310.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-310.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-310.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-310.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-310.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 130.164.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.164.130_udp@PTR;check="$$(dig +tcp +noall +answer +search 130.164.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.164.130_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-310 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-310;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-310 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-310;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-310.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-310.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-310.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-310.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-310.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-310.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-310.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-310.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-310.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-310.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-310.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-310.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-310.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 130.164.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.164.130_udp@PTR;check="$$(dig +tcp +noall +answer +search 130.164.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.164.130_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 11:17:47.552: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.555: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.558: INFO: Unable to read wheezy_udp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.560: INFO: Unable to read wheezy_tcp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.562: INFO: Unable to read wheezy_udp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods 
dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.564: INFO: Unable to read wheezy_tcp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.567: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.569: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.587: INFO: Unable to read jessie_udp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.590: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.592: INFO: Unable to read jessie_udp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.595: INFO: Unable to read jessie_tcp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.598: INFO: Unable to read jessie_udp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource 
(get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.600: INFO: Unable to read jessie_tcp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.603: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.606: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:47.624: INFO: Lookups using dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-310 wheezy_tcp@dns-test-service.dns-310 wheezy_udp@dns-test-service.dns-310.svc wheezy_tcp@dns-test-service.dns-310.svc wheezy_udp@_http._tcp.dns-test-service.dns-310.svc wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-310 jessie_tcp@dns-test-service.dns-310 jessie_udp@dns-test-service.dns-310.svc jessie_tcp@dns-test-service.dns-310.svc jessie_udp@_http._tcp.dns-test-service.dns-310.svc jessie_tcp@_http._tcp.dns-test-service.dns-310.svc] May 14 11:17:52.627: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.631: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods 
dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.634: INFO: Unable to read wheezy_udp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.636: INFO: Unable to read wheezy_tcp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.639: INFO: Unable to read wheezy_udp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.641: INFO: Unable to read wheezy_tcp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.643: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.646: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.663: INFO: Unable to read jessie_udp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.665: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource 
(get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.667: INFO: Unable to read jessie_udp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.670: INFO: Unable to read jessie_tcp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.672: INFO: Unable to read jessie_udp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.675: INFO: Unable to read jessie_tcp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.677: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.679: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:52.694: INFO: Lookups using dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-310 wheezy_tcp@dns-test-service.dns-310 wheezy_udp@dns-test-service.dns-310.svc wheezy_tcp@dns-test-service.dns-310.svc wheezy_udp@_http._tcp.dns-test-service.dns-310.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-310 jessie_tcp@dns-test-service.dns-310 jessie_udp@dns-test-service.dns-310.svc jessie_tcp@dns-test-service.dns-310.svc jessie_udp@_http._tcp.dns-test-service.dns-310.svc jessie_tcp@_http._tcp.dns-test-service.dns-310.svc] May 14 11:17:57.628: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.631: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.634: INFO: Unable to read wheezy_udp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.637: INFO: Unable to read wheezy_tcp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.640: INFO: Unable to read wheezy_udp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.642: INFO: Unable to read wheezy_tcp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.645: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: 
the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.647: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.663: INFO: Unable to read jessie_udp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.666: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.668: INFO: Unable to read jessie_udp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.671: INFO: Unable to read jessie_tcp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.673: INFO: Unable to read jessie_udp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.676: INFO: Unable to read jessie_tcp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.678: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-310.svc from pod 
dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.680: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:17:57.697: INFO: Lookups using dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-310 wheezy_tcp@dns-test-service.dns-310 wheezy_udp@dns-test-service.dns-310.svc wheezy_tcp@dns-test-service.dns-310.svc wheezy_udp@_http._tcp.dns-test-service.dns-310.svc wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-310 jessie_tcp@dns-test-service.dns-310 jessie_udp@dns-test-service.dns-310.svc jessie_tcp@dns-test-service.dns-310.svc jessie_udp@_http._tcp.dns-test-service.dns-310.svc jessie_tcp@_http._tcp.dns-test-service.dns-310.svc] May 14 11:18:02.933: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.936: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.939: INFO: Unable to read wheezy_udp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.941: INFO: Unable to read wheezy_tcp@dns-test-service.dns-310 from pod 
dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.944: INFO: Unable to read wheezy_udp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.946: INFO: Unable to read wheezy_tcp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.948: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.951: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.970: INFO: Unable to read jessie_udp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.973: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.975: INFO: Unable to read jessie_udp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.978: INFO: Unable to read jessie_tcp@dns-test-service.dns-310 
from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.980: INFO: Unable to read jessie_udp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.982: INFO: Unable to read jessie_tcp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.985: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:02.987: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:03.004: INFO: Lookups using dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-310 wheezy_tcp@dns-test-service.dns-310 wheezy_udp@dns-test-service.dns-310.svc wheezy_tcp@dns-test-service.dns-310.svc wheezy_udp@_http._tcp.dns-test-service.dns-310.svc wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-310 jessie_tcp@dns-test-service.dns-310 jessie_udp@dns-test-service.dns-310.svc jessie_tcp@dns-test-service.dns-310.svc jessie_udp@_http._tcp.dns-test-service.dns-310.svc jessie_tcp@_http._tcp.dns-test-service.dns-310.svc] May 14 11:18:07.629: INFO: Unable to read wheezy_udp@dns-test-service 
from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.632: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.635: INFO: Unable to read wheezy_udp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.638: INFO: Unable to read wheezy_tcp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.640: INFO: Unable to read wheezy_udp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.643: INFO: Unable to read wheezy_tcp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.646: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.649: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.668: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.671: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.673: INFO: Unable to read jessie_udp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.676: INFO: Unable to read jessie_tcp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.678: INFO: Unable to read jessie_udp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.681: INFO: Unable to read jessie_tcp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.684: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.686: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:07.703: INFO: Lookups 
using dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-310 wheezy_tcp@dns-test-service.dns-310 wheezy_udp@dns-test-service.dns-310.svc wheezy_tcp@dns-test-service.dns-310.svc wheezy_udp@_http._tcp.dns-test-service.dns-310.svc wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-310 jessie_tcp@dns-test-service.dns-310 jessie_udp@dns-test-service.dns-310.svc jessie_tcp@dns-test-service.dns-310.svc jessie_udp@_http._tcp.dns-test-service.dns-310.svc jessie_tcp@_http._tcp.dns-test-service.dns-310.svc] May 14 11:18:12.628: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.631: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.634: INFO: Unable to read wheezy_udp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.637: INFO: Unable to read wheezy_tcp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.640: INFO: Unable to read wheezy_udp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.643: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.645: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.648: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.666: INFO: Unable to read jessie_udp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.668: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.671: INFO: Unable to read jessie_udp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.673: INFO: Unable to read jessie_tcp@dns-test-service.dns-310 from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.675: INFO: Unable to read jessie_udp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.679: INFO: Unable 
to read jessie_tcp@dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.682: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.685: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-310.svc from pod dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be: the server could not find the requested resource (get pods dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be) May 14 11:18:12.703: INFO: Lookups using dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-310 wheezy_tcp@dns-test-service.dns-310 wheezy_udp@dns-test-service.dns-310.svc wheezy_tcp@dns-test-service.dns-310.svc wheezy_udp@_http._tcp.dns-test-service.dns-310.svc wheezy_tcp@_http._tcp.dns-test-service.dns-310.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-310 jessie_tcp@dns-test-service.dns-310 jessie_udp@dns-test-service.dns-310.svc jessie_tcp@dns-test-service.dns-310.svc jessie_udp@_http._tcp.dns-test-service.dns-310.svc jessie_tcp@_http._tcp.dns-test-service.dns-310.svc] May 14 11:18:17.691: INFO: DNS probes using dns-310/dns-test-a04b12bb-d833-48d2-86ce-71a6113cb6be succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:18:20.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-310" for this suite. 
• [SLOW TEST:45.809 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":74,"skipped":1022,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:18:21.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 14 11:18:23.038: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 11:18:23.290: INFO: Waiting for terminating namespaces to be deleted... 
May 14 11:18:23.292: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 14 11:18:23.297: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 14 11:18:23.297: INFO: Container kindnet-cni ready: true, restart count 1 May 14 11:18:23.297: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 14 11:18:23.297: INFO: Container kube-proxy ready: true, restart count 0 May 14 11:18:23.297: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 14 11:18:23.314: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 14 11:18:23.314: INFO: Container kindnet-cni ready: true, restart count 0 May 14 11:18:23.314: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 14 11:18:23.314: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-34a1d2d1-bf5d-4cac-87a3-438c025c180d 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-34a1d2d1-bf5d-4cac-87a3-438c025c180d off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-34a1d2d1-bf5d-4cac-87a3-438c025c180d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:19:08.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8373" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:47.166 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":75,"skipped":1024,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:19:08.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC May 14 11:19:08.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9366' May 14 11:19:08.548: INFO: stderr: "" May 14 11:19:08.549: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 14 11:19:09.552: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:19:09.552: INFO: Found 0 / 1 May 14 11:19:10.647: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:19:10.647: INFO: Found 0 / 1 May 14 11:19:11.592: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:19:11.592: INFO: Found 0 / 1 May 14 11:19:12.551: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:19:12.551: INFO: Found 0 / 1 May 14 11:19:13.805: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:19:13.805: INFO: Found 0 / 1 May 14 11:19:14.600: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:19:14.600: INFO: Found 0 / 1 May 14 11:19:15.590: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:19:15.590: INFO: Found 0 / 1 May 14 11:19:16.874: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:19:16.874: INFO: Found 0 / 1 May 14 11:19:18.885: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:19:18.885: INFO: Found 0 / 1 May 14 11:19:19.611: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:19:19.611: INFO: Found 1 / 1 May 14 11:19:19.611: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 14 11:19:19.627: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:19:19.627: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 14 11:19:19.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config patch pod agnhost-master-gnz7r --namespace=kubectl-9366 -p {"metadata":{"annotations":{"x":"y"}}}' May 14 11:19:19.714: INFO: stderr: "" May 14 11:19:19.714: INFO: stdout: "pod/agnhost-master-gnz7r patched\n" STEP: checking annotations May 14 11:19:21.078: INFO: Selector matched 1 pods for map[app:agnhost] May 14 11:19:21.078: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:19:21.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9366" for this suite. • [SLOW TEST:12.932 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":76,"skipped":1036,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:19:21.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:19:21.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 14 11:19:25.899: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2020-05-14T11:19:25Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-14T11:19:25Z]] name:name1 resourceVersion:4273230 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:64233d15-5101-41ba-8be1-828876ba85ba] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 14 11:19:36.657: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-14T11:19:35Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-14T11:19:35Z]] name:name2 resourceVersion:4273264 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:49433f45-3d55-42e3-81c4-11118e3a506e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 14 11:19:46.663: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-14T11:19:25Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-14T11:19:46Z]] name:name1 resourceVersion:4273296 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:64233d15-5101-41ba-8be1-828876ba85ba] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 14 11:19:56.694: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2020-05-14T11:19:35Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-14T11:19:56Z]] name:name2 resourceVersion:4273330 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:49433f45-3d55-42e3-81c4-11118e3a506e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 14 11:20:06.699: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-14T11:19:25Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-14T11:19:46Z]] name:name1 resourceVersion:4273360 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:64233d15-5101-41ba-8be1-828876ba85ba] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 14 11:20:16.706: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-14T11:19:35Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-14T11:19:56Z]] name:name2 resourceVersion:4273390 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:49433f45-3d55-42e3-81c4-11118e3a506e] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:20:27.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8493" for this suite. • [SLOW TEST:66.110 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":77,"skipped":1051,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:20:27.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:20:27.337: INFO: 
Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 14 11:20:27.343: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:27.362: INFO: Number of nodes with available pods: 0 May 14 11:20:27.362: INFO: Node kali-worker is running more than one daemon pod May 14 11:20:28.366: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:28.369: INFO: Number of nodes with available pods: 0 May 14 11:20:28.369: INFO: Node kali-worker is running more than one daemon pod May 14 11:20:29.635: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:29.637: INFO: Number of nodes with available pods: 0 May 14 11:20:29.637: INFO: Node kali-worker is running more than one daemon pod May 14 11:20:30.437: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:30.461: INFO: Number of nodes with available pods: 0 May 14 11:20:30.461: INFO: Node kali-worker is running more than one daemon pod May 14 11:20:31.366: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:31.369: INFO: Number of nodes with available pods: 0 May 14 11:20:31.369: INFO: Node kali-worker is running more than one daemon pod May 14 11:20:32.366: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 
11:20:32.388: INFO: Number of nodes with available pods: 1 May 14 11:20:32.388: INFO: Node kali-worker is running more than one daemon pod May 14 11:20:33.366: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:33.370: INFO: Number of nodes with available pods: 2 May 14 11:20:33.370: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 14 11:20:33.401: INFO: Wrong image for pod: daemon-set-x7ftf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:33.401: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:33.418: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:34.422: INFO: Wrong image for pod: daemon-set-x7ftf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:34.422: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:34.426: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:35.422: INFO: Wrong image for pod: daemon-set-x7ftf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:35.422: INFO: Wrong image for pod: daemon-set-xz9ww. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:35.425: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:36.422: INFO: Wrong image for pod: daemon-set-x7ftf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:36.422: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:36.425: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:37.422: INFO: Wrong image for pod: daemon-set-x7ftf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:37.422: INFO: Pod daemon-set-x7ftf is not available May 14 11:20:37.422: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:37.426: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:38.422: INFO: Pod daemon-set-txtkr is not available May 14 11:20:38.422: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
May 14 11:20:38.425: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:39.421: INFO: Pod daemon-set-txtkr is not available May 14 11:20:39.421: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:39.423: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:40.761: INFO: Pod daemon-set-txtkr is not available May 14 11:20:40.761: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:40.805: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:41.575: INFO: Pod daemon-set-txtkr is not available May 14 11:20:41.575: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:41.590: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:42.421: INFO: Pod daemon-set-txtkr is not available May 14 11:20:42.422: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
May 14 11:20:42.425: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:43.423: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:43.427: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:44.422: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:44.425: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:45.423: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:45.428: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:46.422: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:46.422: INFO: Pod daemon-set-xz9ww is not available May 14 11:20:46.425: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:47.421: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
May 14 11:20:47.421: INFO: Pod daemon-set-xz9ww is not available May 14 11:20:47.424: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:48.422: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:48.422: INFO: Pod daemon-set-xz9ww is not available May 14 11:20:48.425: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:49.422: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:49.422: INFO: Pod daemon-set-xz9ww is not available May 14 11:20:49.426: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:50.422: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:50.422: INFO: Pod daemon-set-xz9ww is not available May 14 11:20:50.424: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:51.422: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
May 14 11:20:51.422: INFO: Pod daemon-set-xz9ww is not available May 14 11:20:51.426: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:52.422: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:52.422: INFO: Pod daemon-set-xz9ww is not available May 14 11:20:52.425: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:53.431: INFO: Wrong image for pod: daemon-set-xz9ww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 14 11:20:53.431: INFO: Pod daemon-set-xz9ww is not available May 14 11:20:53.434: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:54.422: INFO: Pod daemon-set-hvqp4 is not available May 14 11:20:54.425: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 14 11:20:54.429: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:54.431: INFO: Number of nodes with available pods: 1 May 14 11:20:54.432: INFO: Node kali-worker is running more than one daemon pod May 14 11:20:55.558: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:55.561: INFO: Number of nodes with available pods: 1 May 14 11:20:55.561: INFO: Node kali-worker is running more than one daemon pod May 14 11:20:56.478: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:56.482: INFO: Number of nodes with available pods: 1 May 14 11:20:56.482: INFO: Node kali-worker is running more than one daemon pod May 14 11:20:57.435: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:57.438: INFO: Number of nodes with available pods: 1 May 14 11:20:57.438: INFO: Node kali-worker is running more than one daemon pod May 14 11:20:58.435: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:58.437: INFO: Number of nodes with available pods: 1 May 14 11:20:58.437: INFO: Node kali-worker is running more than one daemon pod May 14 11:20:59.436: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:20:59.439: INFO: Number of nodes with available pods: 1 May 14 11:20:59.439: INFO: Node kali-worker 
is running more than one daemon pod May 14 11:21:00.436: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:21:00.438: INFO: Number of nodes with available pods: 1 May 14 11:21:00.438: INFO: Node kali-worker is running more than one daemon pod May 14 11:21:01.436: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:21:01.439: INFO: Number of nodes with available pods: 1 May 14 11:21:01.439: INFO: Node kali-worker is running more than one daemon pod May 14 11:21:03.702: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:21:03.899: INFO: Number of nodes with available pods: 1 May 14 11:21:03.899: INFO: Node kali-worker is running more than one daemon pod May 14 11:21:05.192: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:21:05.288: INFO: Number of nodes with available pods: 1 May 14 11:21:05.288: INFO: Node kali-worker is running more than one daemon pod May 14 11:21:05.869: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:21:05.872: INFO: Number of nodes with available pods: 1 May 14 11:21:05.872: INFO: Node kali-worker is running more than one daemon pod May 14 11:21:07.049: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:21:07.052: INFO: Number of nodes with available pods: 1 May 14 
11:21:07.052: INFO: Node kali-worker is running more than one daemon pod May 14 11:21:07.436: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:21:07.439: INFO: Number of nodes with available pods: 1 May 14 11:21:07.439: INFO: Node kali-worker is running more than one daemon pod May 14 11:21:09.266: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:21:09.268: INFO: Number of nodes with available pods: 1 May 14 11:21:09.268: INFO: Node kali-worker is running more than one daemon pod May 14 11:21:09.473: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:21:09.503: INFO: Number of nodes with available pods: 1 May 14 11:21:09.503: INFO: Node kali-worker is running more than one daemon pod May 14 11:21:10.436: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:21:10.439: INFO: Number of nodes with available pods: 1 May 14 11:21:10.439: INFO: Node kali-worker is running more than one daemon pod May 14 11:21:11.437: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:21:11.439: INFO: Number of nodes with available pods: 1 May 14 11:21:11.439: INFO: Node kali-worker is running more than one daemon pod May 14 11:21:12.435: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:21:12.437: INFO: Number of 
nodes with available pods: 2 May 14 11:21:12.437: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1084, will wait for the garbage collector to delete the pods May 14 11:21:12.505: INFO: Deleting DaemonSet.extensions daemon-set took: 6.415085ms May 14 11:21:12.905: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.220849ms May 14 11:21:24.551: INFO: Number of nodes with available pods: 0 May 14 11:21:24.551: INFO: Number of running nodes: 0, number of available pods: 0 May 14 11:21:24.555: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1084/daemonsets","resourceVersion":"4273662"},"items":null} May 14 11:21:24.558: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1084/pods","resourceVersion":"4273662"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:21:24.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1084" for this suite. 
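The RollingUpdate run above swaps the DaemonSet's pod image and then polls until every node is running the new pod. A minimal sketch of the kind of manifest being driven, reconstructed from values visible in the log (initial image `httpd:2.4.38-alpine`, updated to `agnhost:2.12`); the label key `daemonset-name` is a hypothetical placeholder, not taken from the test source:

```python
# Sketch of a DaemonSet spec like the one this test drives through a
# RollingUpdate. Images and names are taken from the log above; the
# selector label key is a hypothetical placeholder.
daemon_set = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "daemon-set", "namespace": "daemonsets-1084"},
    "spec": {
        "selector": {"matchLabels": {"daemonset-name": "daemon-set"}},
        # RollingUpdate is the strategy under test: on a template change,
        # the controller replaces pods node by node instead of requiring
        # manual pod deletion (the OnDelete strategy).
        "updateStrategy": {"type": "RollingUpdate"},
        "template": {
            "metadata": {"labels": {"daemonset-name": "daemon-set"}},
            "spec": {
                "containers": [{
                    "name": "app",
                    "image": "docker.io/library/httpd:2.4.38-alpine",
                }],
            },
        },
    },
}

# The "Update daemon pods image" step then patches the pod template,
# which is what triggers the rolling replacement polled for in the log:
daemon_set["spec"]["template"]["spec"]["containers"][0]["image"] = (
    "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12"
)
```

The "Wrong image for pod" lines in the log correspond to pods still running the old template while the controller works through this replacement.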
• [SLOW TEST:57.352 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":78,"skipped":1069,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:21:24.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-493 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-493 STEP: Creating statefulset with conflicting port in namespace statefulset-493 STEP: Waiting until pod test-pod will start running in namespace 
statefulset-493 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-493 May 14 11:21:32.177: INFO: Observed stateful pod in namespace: statefulset-493, name: ss-0, uid: 5bf329c4-6a5c-4e5b-9570-19ef4473ff8a, status phase: Failed. Waiting for statefulset controller to delete. May 14 11:21:32.336: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-493 STEP: Removing pod with conflicting port in namespace statefulset-493 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-493 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 14 11:21:41.546: INFO: Deleting all statefulset in ns statefulset-493 May 14 11:21:41.549: INFO: Scaling statefulset ss to 0 May 14 11:22:01.596: INFO: Waiting for statefulset status.replicas updated to 0 May 14 11:22:01.599: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:22:01.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-493" for this suite. 
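The "conflicting port" setup above works by pinning an ordinary pod and the stateful pod to the same node with the same host port, so `ss-0` enters phase `Failed` and the StatefulSet controller must delete and recreate it. A rough sketch of that setup; the node name and port number here are hypothetical placeholders, not values from this log:

```python
# Sketch of the scheduling conflict the StatefulSet test constructs:
# two pods pinned to one node, both claiming the same hostPort. The
# node name and port are hypothetical, chosen only for illustration.
node, port = "kali-worker", 21017  # hypothetical values

def pod_with_host_port(name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "namespace": "statefulset-493"},
        "spec": {
            "nodeName": node,  # pin both pods to one node to force the clash
            "containers": [{
                "name": "web",
                "image": "docker.io/library/httpd:2.4.38-alpine",
                "ports": [{"containerPort": 80, "hostPort": port}],
            }],
        },
    }

test_pod = pod_with_host_port("test-pod")  # holds the port first
ss_pod = pod_with_host_port("ss-0")        # fails, then is recreated
```

Once `test-pod` is removed (the "Removing pod with conflicting port" step), the recreated `ss-0` can bind the port and reach the running state.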
• [SLOW TEST:37.039 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":79,"skipped":1087,"failed":0} SS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:22:01.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars May 14 11:22:01.654: INFO: Waiting up to 5m0s for pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca" in namespace "downward-api-3715" to be "Succeeded or Failed" May 14 11:22:01.669: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 15.419506ms May 14 11:22:03.742: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.088250601s May 14 11:22:06.066: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.412644094s May 14 11:22:08.106: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451741661s May 14 11:22:10.162: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.508079857s May 14 11:22:12.164: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.510391329s May 14 11:22:14.168: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.514186139s May 14 11:22:18.692: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 17.038020228s May 14 11:22:20.696: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 19.042125667s May 14 11:22:23.061: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 21.407034717s May 14 11:22:25.371: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca": Phase="Running", Reason="", readiness=true. Elapsed: 23.717058425s May 14 11:22:27.784: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.129882369s STEP: Saw pod success May 14 11:22:27.784: INFO: Pod "downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca" satisfied condition "Succeeded or Failed" May 14 11:22:27.788: INFO: Trying to get logs from node kali-worker2 pod downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca container dapi-container: STEP: delete the pod May 14 11:22:28.063: INFO: Waiting for pod downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca to disappear May 14 11:22:28.091: INFO: Pod downward-api-4fee33a8-a595-46bd-b14e-87b372a0b7ca no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:22:28.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3715" for this suite. • [SLOW TEST:26.551 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1089,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:22:28.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] 
should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 14 11:22:40.451: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:22:41.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8032" for this suite. • [SLOW TEST:13.339 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":81,"skipped":1090,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:22:41.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] 
Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod May 14 11:23:04.185: INFO: Successfully updated pod "labelsupdate3c6965d3-69dc-48d8-b906-6ddcc9d48562" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:23:06.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8516" for this suite. • [SLOW TEST:24.727 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:23:06.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command May 14 11:23:06.299: INFO: Waiting up to 5m0s for pod "var-expansion-980f361d-5558-4b13-b79a-8c7ab9c169bb" in namespace "var-expansion-4765" to be "Succeeded or Failed" May 14 11:23:06.305: INFO: Pod "var-expansion-980f361d-5558-4b13-b79a-8c7ab9c169bb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.403745ms May 14 11:23:08.308: INFO: Pod "var-expansion-980f361d-5558-4b13-b79a-8c7ab9c169bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008443858s May 14 11:23:10.779: INFO: Pod "var-expansion-980f361d-5558-4b13-b79a-8c7ab9c169bb": Phase="Running", Reason="", readiness=true. Elapsed: 4.479720855s May 14 11:23:12.782: INFO: Pod "var-expansion-980f361d-5558-4b13-b79a-8c7ab9c169bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.482701612s STEP: Saw pod success May 14 11:23:12.782: INFO: Pod "var-expansion-980f361d-5558-4b13-b79a-8c7ab9c169bb" satisfied condition "Succeeded or Failed" May 14 11:23:12.784: INFO: Trying to get logs from node kali-worker pod var-expansion-980f361d-5558-4b13-b79a-8c7ab9c169bb container dapi-container: STEP: delete the pod May 14 11:23:12.839: INFO: Waiting for pod var-expansion-980f361d-5558-4b13-b79a-8c7ab9c169bb to disappear May 14 11:23:12.850: INFO: Pod var-expansion-980f361d-5558-4b13-b79a-8c7ab9c169bb no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:23:12.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4765" for this suite. 
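The substitution test above creates a pod whose container command references an environment variable with `$(VAR)` syntax, which the kubelet expands before starting the container. A minimal sketch of such a pod spec, assuming a hypothetical variable name, value, and image (the conformance test's generated pod name and exact spec differ):

```yaml
# Hypothetical sketch: a pod whose command substitutes $(MESSAGE) from its own env.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # illustrative; the test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container        # container name as seen in the log above
    image: busybox
    env:
    - name: MESSAGE
      value: "test-value"
    command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
```

With a spec like this, `$(MESSAGE)` is resolved to `test-value` at container start, and the test only has to wait for the pod to reach `Succeeded` and check the container's output.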
• [SLOW TEST:6.701 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1145,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:23:12.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:23:13.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1290" for this suite. STEP: Destroying namespace "nspatchtest-0d693c54-0d12-47d8-aa3c-6c04a4b75a4a-2628" for this suite. 
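The namespace-patch step above adds a label to the namespace via a merge patch. A sketch of the patch body, with an illustrative label key/value (the test's actual label is not shown in this log):

```yaml
# Hypothetical merge-patch body: adds one label to the namespace's metadata.
metadata:
  labels:
    testLabel: testValue
```

The CLI equivalent (namespace name is a placeholder) is `kubectl patch namespace <name> -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'`; the test then gets the namespace back and asserts the label is present.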
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":84,"skipped":1154,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:23:13.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:23:13.298: INFO: Creating deployment "test-recreate-deployment" May 14 11:23:13.307: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 14 11:23:13.390: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 14 11:23:15.982: INFO: Waiting deployment "test-recreate-deployment" to complete May 14 11:23:15.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052193, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052193, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052193, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052193, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:23:17.988: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 14 11:23:18.010: INFO: Updating deployment test-recreate-deployment May 14 11:23:18.010: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 14 11:23:19.632: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6961 /apis/apps/v1/namespaces/deployment-6961/deployments/test-recreate-deployment 9ebb925e-9429-4c9b-9298-980a00fc029d 4274277 2 2020-05-14 11:23:13 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-14 11:23:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[… raw managedFields bytes elided …],}} {kube-controller-manager Update apps/v1 2020-05-14 11:23:19 +0000 UTC FieldsV1 &FieldsV1{Raw:*[… raw managedFields bytes elided …],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000820e28 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-14 11:23:18 +0000 UTC,LastTransitionTime:2020-05-14 11:23:18 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-14 11:23:19 +0000 UTC,LastTransitionTime:2020-05-14 11:23:13 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 14 11:23:19.635: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-6961 /apis/apps/v1/namespaces/deployment-6961/replicasets/test-recreate-deployment-d5667d9c7 49685bc6-2e06-4e26-b466-8f3b69659bab 4274275 1 2020-05-14 11:23:18 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 9ebb925e-9429-4c9b-9298-980a00fc029d 0xc002315200 0xc002315201}] [] [{kube-controller-manager Update apps/v1 2020-05-14 11:23:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[… raw managedFields bytes elided …],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002315278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 14 11:23:19.635: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 14 11:23:19.635: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c deployment-6961 /apis/apps/v1/namespaces/deployment-6961/replicasets/test-recreate-deployment-74d98b5f7c 5d1f3e71-cce5-4eeb-b8e1-99b56c8b778f 4274261 2 2020-05-14 11:23:13 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 9ebb925e-9429-4c9b-9298-980a00fc029d 0xc002315107 0xc002315108}] [] [{kube-controller-manager Update apps/v1 2020-05-14 11:23:18 +0000 UTC FieldsV1 FieldsV1{Raw:*[… raw managedFields bytes elided …],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002315198 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler
[] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 14 11:23:19.638: INFO: Pod "test-recreate-deployment-d5667d9c7-7586f" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-7586f test-recreate-deployment-d5667d9c7- deployment-6961 /api/v1/namespaces/deployment-6961/pods/test-recreate-deployment-d5667d9c7-7586f 96065d17-3fc8-4ed9-adab-1c49a8a7d6f6 4274273 0 2020-05-14 11:23:18 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 49685bc6-2e06-4e26-b466-8f3b69659bab 0xc000821250 0xc000821251}] [] [{kube-controller-manager Update v1 2020-05-14 11:23:18 +0000 UTC FieldsV1 FieldsV1{Raw:*[… raw managedFields bytes elided …],}} {kubelet Update v1 2020-05-14 11:23:19 +0000 UTC FieldsV1 &FieldsV1{Raw:*[… raw managedFields bytes elided …],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rfms4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rfms4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rfms4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessag
ePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:23:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:23:19 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:23:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:23:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-14 11:23:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:23:19.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6961" for this suite. 
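For reference, the RecreateDeployment test that just finished verifies that, with strategy type Recreate, all pods of the old ReplicaSet are deleted before any new pods are created. A minimal manifest exercising the same behavior might look like this (the name and labels are illustrative, not taken from the test; the image is one used elsewhere in this run):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo          # hypothetical name
spec:
  replicas: 2
  strategy:
    type: Recreate             # old pods are terminated before new pods start
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
```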
• [SLOW TEST:6.539 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":85,"skipped":1163,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:23:19.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 14 11:23:25.032: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:23:26.048: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6186" for this suite. • [SLOW TEST:6.425 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:23:26.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 14 11:23:26.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-adc6a754-5959-45d2-b559-565bdf49a2de" in namespace "projected-4333" to be "Succeeded or Failed" May 14 11:23:26.485: INFO: Pod "downwardapi-volume-adc6a754-5959-45d2-b559-565bdf49a2de": Phase="Pending", Reason="", readiness=false. Elapsed: 139.882633ms May 14 11:23:28.532: INFO: Pod "downwardapi-volume-adc6a754-5959-45d2-b559-565bdf49a2de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187466365s May 14 11:23:30.706: INFO: Pod "downwardapi-volume-adc6a754-5959-45d2-b559-565bdf49a2de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.361458263s May 14 11:23:32.766: INFO: Pod "downwardapi-volume-adc6a754-5959-45d2-b559-565bdf49a2de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.421096839s STEP: Saw pod success May 14 11:23:32.766: INFO: Pod "downwardapi-volume-adc6a754-5959-45d2-b559-565bdf49a2de" satisfied condition "Succeeded or Failed" May 14 11:23:32.768: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-adc6a754-5959-45d2-b559-565bdf49a2de container client-container: STEP: delete the pod May 14 11:23:32.806: INFO: Waiting for pod downwardapi-volume-adc6a754-5959-45d2-b559-565bdf49a2de to disappear May 14 11:23:32.820: INFO: Pod downwardapi-volume-adc6a754-5959-45d2-b559-565bdf49a2de no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:23:32.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4333" for this suite. 
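The projected downwardAPI test above reads limits.memory through a resourceFieldRef; because the container sets no memory limit, the kubelet reports the node's allocatable memory instead. A sketch of such a pod (names and image are illustrative, not taken from the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory here: the projected value falls back
    # to the node's allocatable memory.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```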
• [SLOW TEST:6.757 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1198,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:23:32.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-zgh2 STEP: Creating a pod to test atomic-volume-subpath May 14 11:23:33.858: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zgh2" in namespace "subpath-2641" to be "Succeeded or Failed" May 14 11:23:33.869: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.885095ms May 14 11:23:36.951: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.092425205s May 14 11:23:39.015: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.156159625s May 14 11:23:41.018: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 7.159319352s May 14 11:23:43.022: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 9.16310439s May 14 11:23:45.025: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 11.166074571s May 14 11:23:47.028: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 13.169298299s May 14 11:23:49.031: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 15.172918073s May 14 11:23:51.035: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 17.176343449s May 14 11:23:53.038: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 19.179642844s May 14 11:23:55.041: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 21.182693942s May 14 11:23:57.045: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 23.186492406s May 14 11:23:59.048: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 25.190023236s May 14 11:24:01.052: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 27.193430478s May 14 11:24:03.055: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. 
Elapsed: 29.196757011s May 14 11:24:05.990: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 32.131704436s May 14 11:24:08.803: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Running", Reason="", readiness=true. Elapsed: 34.94407242s May 14 11:24:10.806: INFO: Pod "pod-subpath-test-downwardapi-zgh2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.947912185s STEP: Saw pod success May 14 11:24:10.806: INFO: Pod "pod-subpath-test-downwardapi-zgh2" satisfied condition "Succeeded or Failed" May 14 11:24:10.809: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-zgh2 container test-container-subpath-downwardapi-zgh2: STEP: delete the pod May 14 11:24:11.025: INFO: Waiting for pod pod-subpath-test-downwardapi-zgh2 to disappear May 14 11:24:11.037: INFO: Pod pod-subpath-test-downwardapi-zgh2 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-zgh2 May 14 11:24:11.037: INFO: Deleting pod "pod-subpath-test-downwardapi-zgh2" in namespace "subpath-2641" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:24:11.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2641" for this suite. 
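The subpath test above mounts an individual file out of a downwardAPI volume via subPath, which is what "atomic writer volumes" refers to. A pod mounting only the pod-name key from such a volume could be sketched as (illustrative names; not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "cat /subpath-file"]
    volumeMounts:
    - name: downward
      mountPath: /subpath-file
      subPath: podname          # mount just this one file from the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```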
• [SLOW TEST:38.217 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":88,"skipped":1202,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:24:11.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD May 14 11:24:11.839: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:24:29.647: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6937" for this suite. • [SLOW TEST:18.610 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":89,"skipped":1215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:24:29.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 14 11:24:58.269: INFO: Successfully updated pod "adopt-release-8dr2x" STEP: Checking that the Job readopts the Pod May 14 11:24:58.269: INFO: Waiting up to 15m0s for pod "adopt-release-8dr2x" in namespace "job-9342" to be "adopted" May 14 11:24:58.323: INFO: Pod "adopt-release-8dr2x": Phase="Running", Reason="", readiness=true. 
Elapsed: 53.569088ms May 14 11:25:00.326: INFO: Pod "adopt-release-8dr2x": Phase="Running", Reason="", readiness=true. Elapsed: 2.056419075s May 14 11:25:00.326: INFO: Pod "adopt-release-8dr2x" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 14 11:25:00.834: INFO: Successfully updated pod "adopt-release-8dr2x" STEP: Checking that the Job releases the Pod May 14 11:25:00.834: INFO: Waiting up to 15m0s for pod "adopt-release-8dr2x" in namespace "job-9342" to be "released" May 14 11:25:00.851: INFO: Pod "adopt-release-8dr2x": Phase="Running", Reason="", readiness=true. Elapsed: 17.382759ms May 14 11:25:03.357: INFO: Pod "adopt-release-8dr2x": Phase="Running", Reason="", readiness=true. Elapsed: 2.523192597s May 14 11:25:03.357: INFO: Pod "adopt-release-8dr2x" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:25:03.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9342" for this suite. 
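Adoption and release in the Job test above are driven by label matching: the Job controller selects its pods via an auto-generated label, so stripping that label from a pod (as the test does) causes the controller to release it, while a matching orphaned pod is readopted. A minimal Job of the kind exercised (name, image, and the parallelism value are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release-demo     # hypothetical name
spec:
  parallelism: 2               # the test first waits for active pods == parallelism
  template:
    metadata:
      labels:
        job: adopt-release-demo   # removing the controller's labels releases the pod
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: docker.io/library/busybox:1.29   # assumed image
        command: ["sleep", "3600"]
```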
• [SLOW TEST:33.841 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":90,"skipped":1249,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:25:03.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9533 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-9533 I0514 11:25:04.405078 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9533, replica count: 2 I0514 11:25:07.455611 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 
11:25:10.455776 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 14 11:25:10.455: INFO: Creating new exec pod May 14 11:25:19.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9533 execpodphv92 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 14 11:25:23.151: INFO: stderr: "I0514 11:25:23.051685 1030 log.go:172] (0xc00003a4d0) (0xc0002b4be0) Create stream\nI0514 11:25:23.051726 1030 log.go:172] (0xc00003a4d0) (0xc0002b4be0) Stream added, broadcasting: 1\nI0514 11:25:23.054324 1030 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0514 11:25:23.054400 1030 log.go:172] (0xc00003a4d0) (0xc000666460) Create stream\nI0514 11:25:23.054421 1030 log.go:172] (0xc00003a4d0) (0xc000666460) Stream added, broadcasting: 3\nI0514 11:25:23.055467 1030 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0514 11:25:23.055498 1030 log.go:172] (0xc00003a4d0) (0xc000666500) Create stream\nI0514 11:25:23.055508 1030 log.go:172] (0xc00003a4d0) (0xc000666500) Stream added, broadcasting: 5\nI0514 11:25:23.056443 1030 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0514 11:25:23.139942 1030 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0514 11:25:23.139964 1030 log.go:172] (0xc000666500) (5) Data frame handling\nI0514 11:25:23.139977 1030 log.go:172] (0xc000666500) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0514 11:25:23.144014 1030 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0514 11:25:23.144034 1030 log.go:172] (0xc000666500) (5) Data frame handling\nI0514 11:25:23.144044 1030 log.go:172] (0xc000666500) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0514 11:25:23.144280 1030 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0514 11:25:23.144303 1030 log.go:172] (0xc000666500) 
(5) Data frame handling\nI0514 11:25:23.144326 1030 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0514 11:25:23.144342 1030 log.go:172] (0xc000666460) (3) Data frame handling\nI0514 11:25:23.146344 1030 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0514 11:25:23.146363 1030 log.go:172] (0xc0002b4be0) (1) Data frame handling\nI0514 11:25:23.146380 1030 log.go:172] (0xc0002b4be0) (1) Data frame sent\nI0514 11:25:23.146392 1030 log.go:172] (0xc00003a4d0) (0xc0002b4be0) Stream removed, broadcasting: 1\nI0514 11:25:23.146599 1030 log.go:172] (0xc00003a4d0) Go away received\nI0514 11:25:23.146669 1030 log.go:172] (0xc00003a4d0) (0xc0002b4be0) Stream removed, broadcasting: 1\nI0514 11:25:23.146680 1030 log.go:172] (0xc00003a4d0) (0xc000666460) Stream removed, broadcasting: 3\nI0514 11:25:23.146689 1030 log.go:172] (0xc00003a4d0) (0xc000666500) Stream removed, broadcasting: 5\n" May 14 11:25:23.151: INFO: stdout: "" May 14 11:25:23.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9533 execpodphv92 -- /bin/sh -x -c nc -zv -t -w 2 10.98.152.36 80' May 14 11:25:23.325: INFO: stderr: "I0514 11:25:23.260626 1060 log.go:172] (0xc00088ad10) (0xc00070c320) Create stream\nI0514 11:25:23.260671 1060 log.go:172] (0xc00088ad10) (0xc00070c320) Stream added, broadcasting: 1\nI0514 11:25:23.264701 1060 log.go:172] (0xc00088ad10) Reply frame received for 1\nI0514 11:25:23.264750 1060 log.go:172] (0xc00088ad10) (0xc00070c3c0) Create stream\nI0514 11:25:23.264762 1060 log.go:172] (0xc00088ad10) (0xc00070c3c0) Stream added, broadcasting: 3\nI0514 11:25:23.265794 1060 log.go:172] (0xc00088ad10) Reply frame received for 3\nI0514 11:25:23.265824 1060 log.go:172] (0xc00088ad10) (0xc0007ae000) Create stream\nI0514 11:25:23.265836 1060 log.go:172] (0xc00088ad10) (0xc0007ae000) Stream added, broadcasting: 5\nI0514 11:25:23.266918 1060 log.go:172] (0xc00088ad10) Reply frame received for 5\nI0514 
11:25:23.319579 1060 log.go:172] (0xc00088ad10) Data frame received for 3\nI0514 11:25:23.319615 1060 log.go:172] (0xc00070c3c0) (3) Data frame handling\nI0514 11:25:23.319635 1060 log.go:172] (0xc00088ad10) Data frame received for 5\nI0514 11:25:23.319645 1060 log.go:172] (0xc0007ae000) (5) Data frame handling\nI0514 11:25:23.319656 1060 log.go:172] (0xc0007ae000) (5) Data frame sent\nI0514 11:25:23.319665 1060 log.go:172] (0xc00088ad10) Data frame received for 5\nI0514 11:25:23.319672 1060 log.go:172] (0xc0007ae000) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.152.36 80\nConnection to 10.98.152.36 80 port [tcp/http] succeeded!\nI0514 11:25:23.320994 1060 log.go:172] (0xc00088ad10) Data frame received for 1\nI0514 11:25:23.321022 1060 log.go:172] (0xc00070c320) (1) Data frame handling\nI0514 11:25:23.321053 1060 log.go:172] (0xc00070c320) (1) Data frame sent\nI0514 11:25:23.321074 1060 log.go:172] (0xc00088ad10) (0xc00070c320) Stream removed, broadcasting: 1\nI0514 11:25:23.321100 1060 log.go:172] (0xc00088ad10) Go away received\nI0514 11:25:23.321490 1060 log.go:172] (0xc00088ad10) (0xc00070c320) Stream removed, broadcasting: 1\nI0514 11:25:23.321504 1060 log.go:172] (0xc00088ad10) (0xc00070c3c0) Stream removed, broadcasting: 3\nI0514 11:25:23.321512 1060 log.go:172] (0xc00088ad10) (0xc0007ae000) Stream removed, broadcasting: 5\n" May 14 11:25:23.325: INFO: stdout: "" May 14 11:25:23.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9533 execpodphv92 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 32452' May 14 11:25:23.505: INFO: stderr: "I0514 11:25:23.451751 1079 log.go:172] (0xc000b5fad0) (0xc0009b4a00) Create stream\nI0514 11:25:23.451816 1079 log.go:172] (0xc000b5fad0) (0xc0009b4a00) Stream added, broadcasting: 1\nI0514 11:25:23.456255 1079 log.go:172] (0xc000b5fad0) Reply frame received for 1\nI0514 11:25:23.456315 1079 log.go:172] (0xc000b5fad0) (0xc0005ab540) Create 
stream\nI0514 11:25:23.456337 1079 log.go:172] (0xc000b5fad0) (0xc0005ab540) Stream added, broadcasting: 3\nI0514 11:25:23.457316 1079 log.go:172] (0xc000b5fad0) Reply frame received for 3\nI0514 11:25:23.457348 1079 log.go:172] (0xc000b5fad0) (0xc00041a960) Create stream\nI0514 11:25:23.457358 1079 log.go:172] (0xc000b5fad0) (0xc00041a960) Stream added, broadcasting: 5\nI0514 11:25:23.458141 1079 log.go:172] (0xc000b5fad0) Reply frame received for 5\nI0514 11:25:23.499946 1079 log.go:172] (0xc000b5fad0) Data frame received for 5\nI0514 11:25:23.499991 1079 log.go:172] (0xc00041a960) (5) Data frame handling\nI0514 11:25:23.500016 1079 log.go:172] (0xc00041a960) (5) Data frame sent\nI0514 11:25:23.500033 1079 log.go:172] (0xc000b5fad0) Data frame received for 5\nI0514 11:25:23.500048 1079 log.go:172] (0xc00041a960) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 32452\nConnection to 172.17.0.15 32452 port [tcp/32452] succeeded!\nI0514 11:25:23.500079 1079 log.go:172] (0xc00041a960) (5) Data frame sent\nI0514 11:25:23.500159 1079 log.go:172] (0xc000b5fad0) Data frame received for 5\nI0514 11:25:23.500243 1079 log.go:172] (0xc00041a960) (5) Data frame handling\nI0514 11:25:23.500341 1079 log.go:172] (0xc000b5fad0) Data frame received for 3\nI0514 11:25:23.500366 1079 log.go:172] (0xc0005ab540) (3) Data frame handling\nI0514 11:25:23.502052 1079 log.go:172] (0xc000b5fad0) Data frame received for 1\nI0514 11:25:23.502073 1079 log.go:172] (0xc0009b4a00) (1) Data frame handling\nI0514 11:25:23.502091 1079 log.go:172] (0xc0009b4a00) (1) Data frame sent\nI0514 11:25:23.502118 1079 log.go:172] (0xc000b5fad0) (0xc0009b4a00) Stream removed, broadcasting: 1\nI0514 11:25:23.502223 1079 log.go:172] (0xc000b5fad0) Go away received\nI0514 11:25:23.502463 1079 log.go:172] (0xc000b5fad0) (0xc0009b4a00) Stream removed, broadcasting: 1\nI0514 11:25:23.502479 1079 log.go:172] (0xc000b5fad0) (0xc0005ab540) Stream removed, broadcasting: 3\nI0514 11:25:23.502487 1079 log.go:172] 
(0xc000b5fad0) (0xc00041a960) Stream removed, broadcasting: 5\n" May 14 11:25:23.505: INFO: stdout: "" May 14 11:25:23.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9533 execpodphv92 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 32452' May 14 11:25:23.699: INFO: stderr: "I0514 11:25:23.621747 1099 log.go:172] (0xc000718a50) (0xc0005bf360) Create stream\nI0514 11:25:23.621786 1099 log.go:172] (0xc000718a50) (0xc0005bf360) Stream added, broadcasting: 1\nI0514 11:25:23.623842 1099 log.go:172] (0xc000718a50) Reply frame received for 1\nI0514 11:25:23.623865 1099 log.go:172] (0xc000718a50) (0xc00043a000) Create stream\nI0514 11:25:23.623873 1099 log.go:172] (0xc000718a50) (0xc00043a000) Stream added, broadcasting: 3\nI0514 11:25:23.624629 1099 log.go:172] (0xc000718a50) Reply frame received for 3\nI0514 11:25:23.624642 1099 log.go:172] (0xc000718a50) (0xc00043a0a0) Create stream\nI0514 11:25:23.624648 1099 log.go:172] (0xc000718a50) (0xc00043a0a0) Stream added, broadcasting: 5\nI0514 11:25:23.625780 1099 log.go:172] (0xc000718a50) Reply frame received for 5\nI0514 11:25:23.692736 1099 log.go:172] (0xc000718a50) Data frame received for 5\nI0514 11:25:23.692768 1099 log.go:172] (0xc00043a0a0) (5) Data frame handling\nI0514 11:25:23.692798 1099 log.go:172] (0xc00043a0a0) (5) Data frame sent\nI0514 11:25:23.692812 1099 log.go:172] (0xc000718a50) Data frame received for 5\nI0514 11:25:23.692824 1099 log.go:172] (0xc00043a0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 32452\nConnection to 172.17.0.18 32452 port [tcp/32452] succeeded!\nI0514 11:25:23.692879 1099 log.go:172] (0xc00043a0a0) (5) Data frame sent\nI0514 11:25:23.692943 1099 log.go:172] (0xc000718a50) Data frame received for 3\nI0514 11:25:23.692964 1099 log.go:172] (0xc00043a000) (3) Data frame handling\nI0514 11:25:23.693566 1099 log.go:172] (0xc000718a50) Data frame received for 5\nI0514 11:25:23.693595 1099 
log.go:172] (0xc00043a0a0) (5) Data frame handling\nI0514 11:25:23.695050 1099 log.go:172] (0xc000718a50) Data frame received for 1\nI0514 11:25:23.695072 1099 log.go:172] (0xc0005bf360) (1) Data frame handling\nI0514 11:25:23.695100 1099 log.go:172] (0xc0005bf360) (1) Data frame sent\nI0514 11:25:23.695132 1099 log.go:172] (0xc000718a50) (0xc0005bf360) Stream removed, broadcasting: 1\nI0514 11:25:23.695296 1099 log.go:172] (0xc000718a50) Go away received\nI0514 11:25:23.695556 1099 log.go:172] (0xc000718a50) (0xc0005bf360) Stream removed, broadcasting: 1\nI0514 11:25:23.695590 1099 log.go:172] (0xc000718a50) (0xc00043a000) Stream removed, broadcasting: 3\nI0514 11:25:23.695605 1099 log.go:172] (0xc000718a50) (0xc00043a0a0) Stream removed, broadcasting: 5\n" May 14 11:25:23.699: INFO: stdout: "" May 14 11:25:23.699: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:25:23.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9533" for this suite. 
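The Services test above starts from a type=ExternalName service, then changes it to type=NodePort backed by a replication controller, and verifies reachability with nc on the service name, cluster IP, and node IP:nodePort. The before/after states could look like this (names, selector, and externalName target are illustrative; port 80 matches what the log probes):

```yaml
# Before: ExternalName resolves to a CNAME; no cluster IP, no endpoints.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com    # hypothetical target
---
# After: type changed to NodePort, selecting the backing pods.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: NodePort
  selector:
    name: externalname-service # assumed label on the RC's pods
  ports:
  - port: 80
    targetPort: 80
    # nodePort is allocated automatically if omitted (32452 in this log)
```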
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:20.273 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":91,"skipped":1264,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:25:23.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium May 14 11:25:23.813: INFO: Waiting up to 5m0s for pod "pod-0305ed3c-e277-4581-a685-bf2e5c876671" in namespace "emptydir-5570" to be "Succeeded or Failed" May 14 11:25:23.837: INFO: Pod "pod-0305ed3c-e277-4581-a685-bf2e5c876671": Phase="Pending", Reason="", readiness=false. Elapsed: 24.017344ms May 14 11:25:25.841: INFO: Pod "pod-0305ed3c-e277-4581-a685-bf2e5c876671": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027667977s May 14 11:25:27.844: INFO: Pod "pod-0305ed3c-e277-4581-a685-bf2e5c876671": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031052985s May 14 11:25:29.850: INFO: Pod "pod-0305ed3c-e277-4581-a685-bf2e5c876671": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037180579s May 14 11:25:32.037: INFO: Pod "pod-0305ed3c-e277-4581-a685-bf2e5c876671": Phase="Pending", Reason="", readiness=false. Elapsed: 8.224161713s May 14 11:25:34.412: INFO: Pod "pod-0305ed3c-e277-4581-a685-bf2e5c876671": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.598415591s STEP: Saw pod success May 14 11:25:34.412: INFO: Pod "pod-0305ed3c-e277-4581-a685-bf2e5c876671" satisfied condition "Succeeded or Failed" May 14 11:25:34.415: INFO: Trying to get logs from node kali-worker2 pod pod-0305ed3c-e277-4581-a685-bf2e5c876671 container test-container: STEP: delete the pod May 14 11:25:36.130: INFO: Waiting for pod pod-0305ed3c-e277-4581-a685-bf2e5c876671 to disappear May 14 11:25:36.246: INFO: Pod pod-0305ed3c-e277-4581-a685-bf2e5c876671 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:25:36.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5570" for this suite. 
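The EmptyDir (root,0666,default) case above creates a pod that writes a file into an emptyDir on the node's default medium and verifies its mode. A sketch of that pod, assuming agnhost-style mounttest arguments (the exact flags are illustrative):

```python
# Sketch of the pod the EmptyDir (root,0666,default) test creates.
# The mounttest-style args are illustrative; the image tag matches
# the one seen elsewhere in this log.

MODE = 0o666  # requested file mode; empty emptyDir{} means the default (disk) medium

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-0666"},
    "spec": {
        "restartPolicy": "Never",
        "volumes": [{"name": "test-volume", "emptyDir": {}}],
        "containers": [{
            "name": "test-container",
            "image": "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
            "args": ["mounttest",
                     "--new_file_0666=/test-volume/test-file",
                     "--file_perm=/test-volume/test-file"],
            "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
        }],
    },
}
```

The pod runs to completion, the framework reads the container log to check the reported permissions, and the "Succeeded or Failed" wait above is satisfied by the Succeeded phase.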
• [SLOW TEST:12.566 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1273,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:25:36.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0514 11:25:44.529713 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 14 11:25:44.529: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:25:44.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1433" for this suite. 
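The garbage-collector case above deletes a Deployment without orphaning, so the dependent ReplicaSet and Pods are cascaded away via their ownerReferences. A sketch of the delete options and owner reference involved, with an illustrative Deployment name:

```python
# Delete options for the non-orphaning (cascading) case tested above.
# With "Background" the Deployment is removed immediately and the
# garbage collector then deletes dependents through ownerReferences.

delete_options = {
    "apiVersion": "meta.k8s.io/v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Background",  # "Foreground" would block on dependents
}

# The GC discovers the ReplicaSet as a dependent through an owner
# reference like this one, stamped by the Deployment controller
# (the name here is illustrative):
owner_ref = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "name": "simpletest-deployment",
    "controller": True,
    "blockOwnerDeletion": True,
}
```

The "expected 0 rs, got 1 rs" lines above are the poll loop observing the cascade in progress before it converges to zero.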
• [SLOW TEST:8.933 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":93,"skipped":1284,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:25:45.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-7e3677a1-b502-4fdf-9db8-f7e9c9688633 STEP: Creating a pod to test consume configMaps May 14 11:25:47.387: INFO: Waiting up to 5m0s for pod "pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65" in namespace "configmap-1302" to be "Succeeded or Failed" May 14 11:25:47.439: INFO: Pod "pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65": Phase="Pending", Reason="", readiness=false. Elapsed: 51.920096ms May 14 11:25:49.497: INFO: Pod "pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.110106228s May 14 11:25:52.589: INFO: Pod "pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65": Phase="Pending", Reason="", readiness=false. Elapsed: 5.201920132s May 14 11:25:54.707: INFO: Pod "pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65": Phase="Pending", Reason="", readiness=false. Elapsed: 7.319779814s May 14 11:25:56.711: INFO: Pod "pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65": Phase="Pending", Reason="", readiness=false. Elapsed: 9.323597454s May 14 11:25:59.160: INFO: Pod "pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65": Phase="Pending", Reason="", readiness=false. Elapsed: 11.772678839s May 14 11:26:01.215: INFO: Pod "pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65": Phase="Pending", Reason="", readiness=false. Elapsed: 13.828141188s May 14 11:26:03.218: INFO: Pod "pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.830925193s STEP: Saw pod success May 14 11:26:03.218: INFO: Pod "pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65" satisfied condition "Succeeded or Failed" May 14 11:26:03.220: INFO: Trying to get logs from node kali-worker pod pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65 container configmap-volume-test: STEP: delete the pod May 14 11:26:03.295: INFO: Waiting for pod pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65 to disappear May 14 11:26:03.309: INFO: Pod pod-configmaps-b7d58ce8-617e-4233-a105-44e39d58ba65 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:26:03.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1302" for this suite. 
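The ConfigMap-volume case above sets `defaultMode` on the volume source and has the pod report the resulting file mode. A sketch of that volume stanza; note Kubernetes serializes the mode as a decimal integer, and the specific 0400 value here is an assumption (the conformance test commonly uses it):

```python
# Sketch of the ConfigMap volume with defaultMode set, as in the test
# above. JSON has no octal literals, so 0o400 is serialized as 256.

DEFAULT_MODE = 0o400  # assumed value; the log does not show the mode itself

configmap_volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-test-volume-7e3677a1-b502-4fdf-9db8-f7e9c9688633",
        "defaultMode": DEFAULT_MODE,
    },
}
```

Every key projected from the ConfigMap gets this mode unless an individual `items[].mode` overrides it.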
• [SLOW TEST:18.048 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1291,"failed":0} SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:26:03.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod May 14 11:26:03.423: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:26:10.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"init-container-6919" for this suite. • [SLOW TEST:6.956 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":95,"skipped":1297,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:26:10.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0514 11:26:50.842617 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 14 11:26:50.842: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:26:50.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6211" for this suite. 
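This is the orphaning counterpart: the ReplicationController is deleted with an Orphan propagation policy, and the 30-second watch above confirms the garbage collector leaves the pods alone. A sketch of the options and of what orphaning does to a dependent:

```python
# Delete options for the orphaning case tested above: the RC goes
# away but its pods survive.

orphan_delete_options = {
    "apiVersion": "meta.k8s.io/v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Orphan",
}

def orphan(pod):
    """What orphaning amounts to for each dependent: the garbage
    collector strips the owner reference, so the pod is no longer
    subject to cascading deletion."""
    meta = dict(pod["metadata"])
    meta.pop("ownerReferences", None)
    return {**pod, "metadata": meta}
```

Once orphaned, the pods can be adopted by a new controller with a matching selector, or cleaned up explicitly.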
• [SLOW TEST:40.616 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":96,"skipped":1309,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:26:50.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-4c88a2d3-88e3-4899-a08d-2cb12518b34d STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-4c88a2d3-88e3-4899-a08d-2cb12518b34d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:27:15.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1315" for this suite. 
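The projected-ConfigMap case above updates the ConfigMap and then polls the mounted file until the change appears; projected volume contents are refreshed by the kubelet on its sync period, which is why the log shows "waiting to observe update in volume" rather than an instant check. A sketch of the projected volume involved, with an illustrative key/path mapping:

```python
# Sketch of the projected ConfigMap volume whose contents the test
# watches. The ConfigMap name is the one from the log; the key and
# path in items are illustrative.

projected_volume = {
    "name": "projected-configmap-volume",
    "projected": {
        "sources": [{
            "configMap": {
                "name": "projected-configmap-test-upd-4c88a2d3-88e3-4899-a08d-2cb12518b34d",
                "items": [{"key": "data-1", "path": "path/to/data-1"}],
            },
        }],
    },
}
```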
• [SLOW TEST:24.928 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1315,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:27:15.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3410 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-3410 May 14 11:27:18.183: INFO: Found 0 stateful pods, waiting for 1 May 14 11:27:28.185: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale 
subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 14 11:27:28.323: INFO: Deleting all statefulset in ns statefulset-3410 May 14 11:27:28.365: INFO: Scaling statefulset ss to 0 May 14 11:27:48.567: INFO: Waiting for statefulset status.replicas updated to 0 May 14 11:27:48.569: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:27:48.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3410" for this suite. • [SLOW TEST:32.773 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":98,"skipped":1333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:27:48.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 14 11:27:58.726: INFO: &Pod{ObjectMeta:{send-events-f357a211-11c8-4147-8a2b-c841820d25f9 events-6664 /api/v1/namespaces/events-6664/pods/send-events-f357a211-11c8-4147-8a2b-c841820d25f9 42f527a4-b4eb-475a-88d1-7119bca62045 4275673 0 2020-05-14 11:27:48 +0000 UTC map[name:foo time:630461966] map[] [] [] [{e2e.test Update v1 2020-05-14 11:27:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 
44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-14 11:27:56 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 
34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sxq8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sxq8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sxq8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:27:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:27:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:27:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:27:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.16,StartTime:2020-05-14 11:27:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 11:27:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://a2bc637969abf5dc7e3cd1d435f59a8166d37103b389f071cff6cb29779b4e46,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 14 11:28:00.729: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 14 11:28:02.749: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:28:02.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6664" for this suite. 
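The Events test above checks for one event from the scheduler and one from the kubelet, each scoped to the pod it created. A sketch of the field selectors such a query uses (the selector-building helper is illustrative; the source value for the scheduler is assumed to be "default-scheduler", matching the schedulerName in the pod spec above):

```python
# Build the fieldSelector string used to find events about a specific
# pod, usable with `kubectl get events --field-selector=...` or the
# events API. The helper itself is illustrative.

def event_selector(pod_name, namespace, source):
    fields = {
        "involvedObject.kind": "Pod",
        "involvedObject.name": pod_name,
        "involvedObject.namespace": namespace,
        "source": source,
    }
    return ",".join(f"{k}={v}" for k, v in sorted(fields.items()))

scheduler_sel = event_selector(
    "send-events-f357a211-11c8-4147-8a2b-c841820d25f9",
    "events-6664", "default-scheduler")
kubelet_sel = event_selector(
    "send-events-f357a211-11c8-4147-8a2b-c841820d25f9",
    "events-6664", "kubelet")
```

The "Saw scheduler event" and "Saw kubelet event" lines above correspond to these two queries each returning at least one event.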
• [SLOW TEST:14.239 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":99,"skipped":1356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:28:02.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 11:28:03.602: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 11:28:05.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:28:07.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:28:10.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:28:11.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:28:13.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:28:15.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052483, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 11:28:18.635: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:28:18.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating 
webhook for custom resource e2e-test-webhook-6831-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:28:20.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9897" for this suite. STEP: Destroying namespace "webhook-9897-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.556 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":100,"skipped":1382,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:28:20.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 14 11:28:20.513: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53b632ce-001a-414b-99c5-ffbfc59a2129" in namespace "downward-api-1301" to be "Succeeded or Failed" May 14 11:28:20.516: INFO: Pod "downwardapi-volume-53b632ce-001a-414b-99c5-ffbfc59a2129": Phase="Pending", Reason="", readiness=false. Elapsed: 2.667328ms May 14 11:28:23.349: INFO: Pod "downwardapi-volume-53b632ce-001a-414b-99c5-ffbfc59a2129": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836156727s May 14 11:28:25.377: INFO: Pod "downwardapi-volume-53b632ce-001a-414b-99c5-ffbfc59a2129": Phase="Pending", Reason="", readiness=false. Elapsed: 4.864455752s May 14 11:28:27.569: INFO: Pod "downwardapi-volume-53b632ce-001a-414b-99c5-ffbfc59a2129": Phase="Pending", Reason="", readiness=false. Elapsed: 7.056440622s May 14 11:28:29.573: INFO: Pod "downwardapi-volume-53b632ce-001a-414b-99c5-ffbfc59a2129": Phase="Pending", Reason="", readiness=false. Elapsed: 9.060211021s May 14 11:28:31.576: INFO: Pod "downwardapi-volume-53b632ce-001a-414b-99c5-ffbfc59a2129": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.063455695s STEP: Saw pod success May 14 11:28:31.576: INFO: Pod "downwardapi-volume-53b632ce-001a-414b-99c5-ffbfc59a2129" satisfied condition "Succeeded or Failed" May 14 11:28:31.579: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-53b632ce-001a-414b-99c5-ffbfc59a2129 container client-container: STEP: delete the pod May 14 11:28:31.660: INFO: Waiting for pod downwardapi-volume-53b632ce-001a-414b-99c5-ffbfc59a2129 to disappear May 14 11:28:31.672: INFO: Pod downwardapi-volume-53b632ce-001a-414b-99c5-ffbfc59a2129 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:28:31.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1301" for this suite. • [SLOW TEST:11.318 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:28:31.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-5s4t STEP: Creating a pod to test atomic-volume-subpath May 14 11:28:31.876: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5s4t" in namespace "subpath-4998" to be "Succeeded or Failed" May 14 11:28:32.474: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Pending", Reason="", readiness=false. Elapsed: 598.067327ms May 14 11:28:34.498: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621717174s May 14 11:28:36.500: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.624165628s May 14 11:28:38.503: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Running", Reason="", readiness=true. Elapsed: 6.627194566s May 14 11:28:40.511: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Running", Reason="", readiness=true. Elapsed: 8.634656554s May 14 11:28:42.514: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Running", Reason="", readiness=true. Elapsed: 10.638459446s May 14 11:28:44.519: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Running", Reason="", readiness=true. Elapsed: 12.643157706s May 14 11:28:46.524: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Running", Reason="", readiness=true. Elapsed: 14.6480148s May 14 11:28:48.529: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Running", Reason="", readiness=true. Elapsed: 16.653066366s May 14 11:28:50.809: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.933244642s May 14 11:28:52.813: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Running", Reason="", readiness=true. Elapsed: 20.936711028s May 14 11:28:54.816: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Running", Reason="", readiness=true. Elapsed: 22.939911672s May 14 11:28:56.845: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Running", Reason="", readiness=true. Elapsed: 24.968913352s May 14 11:28:58.849: INFO: Pod "pod-subpath-test-projected-5s4t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.973545523s STEP: Saw pod success May 14 11:28:58.849: INFO: Pod "pod-subpath-test-projected-5s4t" satisfied condition "Succeeded or Failed" May 14 11:28:58.853: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-5s4t container test-container-subpath-projected-5s4t: STEP: delete the pod May 14 11:28:58.889: INFO: Waiting for pod pod-subpath-test-projected-5s4t to disappear May 14 11:28:58.900: INFO: Pod pod-subpath-test-projected-5s4t no longer exists STEP: Deleting pod pod-subpath-test-projected-5s4t May 14 11:28:58.900: INFO: Deleting pod "pod-subpath-test-projected-5s4t" in namespace "subpath-4998" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:28:58.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4998" for this suite. 
• [SLOW TEST:27.205 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":102,"skipped":1449,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:28:58.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:28:58.960: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:29:02.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8549" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:29:03.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:29:07.160: INFO: Waiting up to 5m0s for pod "client-envvars-a3874fec-9a1c-477c-830c-357029c4de41" in namespace "pods-9660" to be "Succeeded or Failed" May 14 11:29:07.177: INFO: Pod "client-envvars-a3874fec-9a1c-477c-830c-357029c4de41": Phase="Pending", Reason="", readiness=false. Elapsed: 16.534228ms May 14 11:29:09.181: INFO: Pod "client-envvars-a3874fec-9a1c-477c-830c-357029c4de41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020839551s May 14 11:29:11.378: INFO: Pod "client-envvars-a3874fec-9a1c-477c-830c-357029c4de41": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.218029674s STEP: Saw pod success May 14 11:29:11.378: INFO: Pod "client-envvars-a3874fec-9a1c-477c-830c-357029c4de41" satisfied condition "Succeeded or Failed" May 14 11:29:11.381: INFO: Trying to get logs from node kali-worker pod client-envvars-a3874fec-9a1c-477c-830c-357029c4de41 container env3cont: STEP: delete the pod May 14 11:29:11.721: INFO: Waiting for pod client-envvars-a3874fec-9a1c-477c-830c-357029c4de41 to disappear May 14 11:29:11.791: INFO: Pod client-envvars-a3874fec-9a1c-477c-830c-357029c4de41 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:29:11.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9660" for this suite. • [SLOW TEST:8.795 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1487,"failed":0} SSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:29:11.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:29:11.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-618" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":105,"skipped":1494,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:29:12.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:29:16.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-303" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1514,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:29:16.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-2363 STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 11:29:16.297: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 14 11:29:16.353: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 14 11:29:18.356: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 14 11:29:20.402: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 14 11:29:22.363: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:29:24.356: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:29:26.378: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:29:28.372: INFO: The status of Pod 
netserver-0 is Running (Ready = false) May 14 11:29:30.356: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:29:32.357: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:29:34.358: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 11:29:36.358: INFO: The status of Pod netserver-0 is Running (Ready = true) May 14 11:29:36.365: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 14 11:29:42.458: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.142:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2363 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:29:42.458: INFO: >>> kubeConfig: /root/.kube/config I0514 11:29:42.484146 7 log.go:172] (0xc0027f6000) (0xc001221720) Create stream I0514 11:29:42.484176 7 log.go:172] (0xc0027f6000) (0xc001221720) Stream added, broadcasting: 1 I0514 11:29:42.485803 7 log.go:172] (0xc0027f6000) Reply frame received for 1 I0514 11:29:42.485832 7 log.go:172] (0xc0027f6000) (0xc002a20000) Create stream I0514 11:29:42.485841 7 log.go:172] (0xc0027f6000) (0xc002a20000) Stream added, broadcasting: 3 I0514 11:29:42.486857 7 log.go:172] (0xc0027f6000) Reply frame received for 3 I0514 11:29:42.486912 7 log.go:172] (0xc0027f6000) (0xc002a200a0) Create stream I0514 11:29:42.486935 7 log.go:172] (0xc0027f6000) (0xc002a200a0) Stream added, broadcasting: 5 I0514 11:29:42.487747 7 log.go:172] (0xc0027f6000) Reply frame received for 5 I0514 11:29:42.623606 7 log.go:172] (0xc0027f6000) Data frame received for 3 I0514 11:29:42.623642 7 log.go:172] (0xc002a20000) (3) Data frame handling I0514 11:29:42.623670 7 log.go:172] (0xc002a20000) (3) Data frame sent I0514 11:29:42.623747 7 log.go:172] (0xc0027f6000) Data frame received for 3 I0514 11:29:42.623776 7 log.go:172] (0xc002a20000) (3) Data frame 
handling I0514 11:29:42.623796 7 log.go:172] (0xc0027f6000) Data frame received for 5 I0514 11:29:42.623808 7 log.go:172] (0xc002a200a0) (5) Data frame handling I0514 11:29:42.626778 7 log.go:172] (0xc0027f6000) Data frame received for 1 I0514 11:29:42.626805 7 log.go:172] (0xc001221720) (1) Data frame handling I0514 11:29:42.626821 7 log.go:172] (0xc001221720) (1) Data frame sent I0514 11:29:42.626833 7 log.go:172] (0xc0027f6000) (0xc001221720) Stream removed, broadcasting: 1 I0514 11:29:42.626847 7 log.go:172] (0xc0027f6000) Go away received I0514 11:29:42.627015 7 log.go:172] (0xc0027f6000) (0xc001221720) Stream removed, broadcasting: 1 I0514 11:29:42.627034 7 log.go:172] (0xc0027f6000) (0xc002a20000) Stream removed, broadcasting: 3 I0514 11:29:42.627068 7 log.go:172] (0xc0027f6000) (0xc002a200a0) Stream removed, broadcasting: 5 May 14 11:29:42.627: INFO: Found all expected endpoints: [netserver-0] May 14 11:29:42.629: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.21:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2363 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:29:42.629: INFO: >>> kubeConfig: /root/.kube/config I0514 11:29:42.659056 7 log.go:172] (0xc0020d7ad0) (0xc001221c20) Create stream I0514 11:29:42.659088 7 log.go:172] (0xc0020d7ad0) (0xc001221c20) Stream added, broadcasting: 1 I0514 11:29:42.660825 7 log.go:172] (0xc0020d7ad0) Reply frame received for 1 I0514 11:29:42.660864 7 log.go:172] (0xc0020d7ad0) (0xc001ae7220) Create stream I0514 11:29:42.660878 7 log.go:172] (0xc0020d7ad0) (0xc001ae7220) Stream added, broadcasting: 3 I0514 11:29:42.662089 7 log.go:172] (0xc0020d7ad0) Reply frame received for 3 I0514 11:29:42.662119 7 log.go:172] (0xc0020d7ad0) (0xc000fea460) Create stream I0514 11:29:42.662128 7 log.go:172] (0xc0020d7ad0) (0xc000fea460) Stream added, broadcasting: 5 I0514 
11:29:42.663056 7 log.go:172] (0xc0020d7ad0) Reply frame received for 5 I0514 11:29:42.724188 7 log.go:172] (0xc0020d7ad0) Data frame received for 3 I0514 11:29:42.724221 7 log.go:172] (0xc001ae7220) (3) Data frame handling I0514 11:29:42.724229 7 log.go:172] (0xc001ae7220) (3) Data frame sent I0514 11:29:42.724235 7 log.go:172] (0xc0020d7ad0) Data frame received for 3 I0514 11:29:42.724246 7 log.go:172] (0xc001ae7220) (3) Data frame handling I0514 11:29:42.724269 7 log.go:172] (0xc0020d7ad0) Data frame received for 5 I0514 11:29:42.724279 7 log.go:172] (0xc000fea460) (5) Data frame handling I0514 11:29:42.725592 7 log.go:172] (0xc0020d7ad0) Data frame received for 1 I0514 11:29:42.725620 7 log.go:172] (0xc001221c20) (1) Data frame handling I0514 11:29:42.725651 7 log.go:172] (0xc001221c20) (1) Data frame sent I0514 11:29:42.725679 7 log.go:172] (0xc0020d7ad0) (0xc001221c20) Stream removed, broadcasting: 1 I0514 11:29:42.725698 7 log.go:172] (0xc0020d7ad0) Go away received I0514 11:29:42.725792 7 log.go:172] (0xc0020d7ad0) (0xc001221c20) Stream removed, broadcasting: 1 I0514 11:29:42.725805 7 log.go:172] (0xc0020d7ad0) (0xc001ae7220) Stream removed, broadcasting: 3 I0514 11:29:42.725812 7 log.go:172] (0xc0020d7ad0) (0xc000fea460) Stream removed, broadcasting: 5 May 14 11:29:42.725: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:29:42.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2363" for this suite. 
• [SLOW TEST:26.519 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1569,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:29:42.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:29:42.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1076" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":108,"skipped":1600,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:29:42.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:29:42.896: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 14 11:29:47.902: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 14 11:29:47.902: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 14 11:29:53.960: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6009 /apis/apps/v1/namespaces/deployment-6009/deployments/test-cleanup-deployment b71ff785-ba42-4b48-9b07-2cc9d22f7f10 4276375 1 2020-05-14 11:29:47 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update 
apps/v1 2020-05-14 11:29:47 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}},}} {kube-controller-manager Update apps/v1 2020-05-14 11:29:52 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}},}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004908d48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-14 11:29:48 +0000 UTC,LastTransitionTime:2020-05-14 11:29:48 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-b4867b47f" has successfully progressed.,LastUpdateTime:2020-05-14 11:29:52 +0000 UTC,LastTransitionTime:2020-05-14 11:29:47 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 14 11:29:53.969: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f deployment-6009 /apis/apps/v1/namespaces/deployment-6009/replicasets/test-cleanup-deployment-b4867b47f 92d9e599-2aee-4e67-a3ab-97232c0e06c8 4276361 1 2020-05-14 11:29:47 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment b71ff785-ba42-4b48-9b07-2cc9d22f7f10 0xc0049091a0 0xc0049091a1}] [] [{kube-controller-manager Update apps/v1 2020-05-14 11:29:51 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b71ff785-ba42-4b48-9b07-2cc9d22f7f10\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}},}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004909218 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 14 11:29:54.025: INFO: Pod "test-cleanup-deployment-b4867b47f-zqkdv" is available: &Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-zqkdv test-cleanup-deployment-b4867b47f- deployment-6009 /api/v1/namespaces/deployment-6009/pods/test-cleanup-deployment-b4867b47f-zqkdv bb0e3db6-fa66-414f-987e-0281843d46dd 4276360 0 2020-05-14 11:29:47 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 92d9e599-2aee-4e67-a3ab-97232c0e06c8 0xc00548db80 0xc00548db81}] [] [{kube-controller-manager Update v1 2020-05-14 11:29:47 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d9e599-2aee-4e67-a3ab-97232c0e06c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-14 11:29:51 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.23\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v9tlh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v9tlh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v9tlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},Image
PullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:29:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:29:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:29:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:29:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.23,StartTime:2020-05-14 11:29:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 11:29:51 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://7ea21ed900508fa7f0bf63019b75aa8ee306168d8a84ebf859ca5ca0bbfe9779,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:29:54.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6009" for this suite. • [SLOW TEST:11.201 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":109,"skipped":1638,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:29:54.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support 
(root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs May 14 11:29:54.159: INFO: Waiting up to 5m0s for pod "pod-e79aa134-15aa-4f4a-a805-49f9b9571e99" in namespace "emptydir-8915" to be "Succeeded or Failed" May 14 11:29:54.167: INFO: Pod "pod-e79aa134-15aa-4f4a-a805-49f9b9571e99": Phase="Pending", Reason="", readiness=false. Elapsed: 7.275066ms May 14 11:29:56.169: INFO: Pod "pod-e79aa134-15aa-4f4a-a805-49f9b9571e99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010050951s May 14 11:29:58.229: INFO: Pod "pod-e79aa134-15aa-4f4a-a805-49f9b9571e99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069510381s STEP: Saw pod success May 14 11:29:58.229: INFO: Pod "pod-e79aa134-15aa-4f4a-a805-49f9b9571e99" satisfied condition "Succeeded or Failed" May 14 11:29:58.232: INFO: Trying to get logs from node kali-worker pod pod-e79aa134-15aa-4f4a-a805-49f9b9571e99 container test-container: STEP: delete the pod May 14 11:29:58.534: INFO: Waiting for pod pod-e79aa134-15aa-4f4a-a805-49f9b9571e99 to disappear May 14 11:29:58.666: INFO: Pod pod-e79aa134-15aa-4f4a-a805-49f9b9571e99 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:29:58.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8915" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1639,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:29:58.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 14 11:29:58.737: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fefcdee0-e58a-473c-909a-8d0a6302db27" in namespace "projected-6860" to be "Succeeded or Failed" May 14 11:29:58.815: INFO: Pod "downwardapi-volume-fefcdee0-e58a-473c-909a-8d0a6302db27": Phase="Pending", Reason="", readiness=false. Elapsed: 78.199856ms May 14 11:30:00.909: INFO: Pod "downwardapi-volume-fefcdee0-e58a-473c-909a-8d0a6302db27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172170821s May 14 11:30:02.913: INFO: Pod "downwardapi-volume-fefcdee0-e58a-473c-909a-8d0a6302db27": Phase="Running", Reason="", readiness=true. Elapsed: 4.175942201s May 14 11:30:04.922: INFO: Pod "downwardapi-volume-fefcdee0-e58a-473c-909a-8d0a6302db27": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.185179869s STEP: Saw pod success May 14 11:30:04.922: INFO: Pod "downwardapi-volume-fefcdee0-e58a-473c-909a-8d0a6302db27" satisfied condition "Succeeded or Failed" May 14 11:30:04.924: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-fefcdee0-e58a-473c-909a-8d0a6302db27 container client-container: STEP: delete the pod May 14 11:30:05.014: INFO: Waiting for pod downwardapi-volume-fefcdee0-e58a-473c-909a-8d0a6302db27 to disappear May 14 11:30:05.022: INFO: Pod downwardapi-volume-fefcdee0-e58a-473c-909a-8d0a6302db27 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:30:05.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6860" for this suite. • [SLOW TEST:6.374 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1669,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:30:05.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:30:05.132: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-4a95347a-5fad-4238-9c7e-24e0e1483651" in namespace "security-context-test-5344" to be "Succeeded or Failed" May 14 11:30:05.143: INFO: Pod "busybox-readonly-false-4a95347a-5fad-4238-9c7e-24e0e1483651": Phase="Pending", Reason="", readiness=false. Elapsed: 10.29086ms May 14 11:30:07.147: INFO: Pod "busybox-readonly-false-4a95347a-5fad-4238-9c7e-24e0e1483651": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015153697s May 14 11:30:09.150: INFO: Pod "busybox-readonly-false-4a95347a-5fad-4238-9c7e-24e0e1483651": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017944214s May 14 11:30:09.150: INFO: Pod "busybox-readonly-false-4a95347a-5fad-4238-9c7e-24e0e1483651" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:30:09.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5344" for this suite. 
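[Editor's note] In the object dumps above, optional API fields print as pointers: nil means "unset", while *false, *0, or *true are dereferenced values (e.g. ReadOnlyRootFilesystem:nil vs TerminationGracePeriodSeconds:*0). A minimal sketch of why, using an invented MiniSecurityContext type (the real types live in k8s.io/api/core/v1):

```go
package main

import "fmt"

// MiniSecurityContext is a hypothetical cut-down analogue of
// v1.SecurityContext: optional fields are pointers so that "unset"
// (nil) is distinguishable from an explicit false.
type MiniSecurityContext struct {
	ReadOnlyRootFilesystem *bool
}

// boolPtr returns a pointer to b, the usual helper for optional bool fields.
func boolPtr(b bool) *bool { return &b }

// describe renders the field the way the e2e dumps do: nil for unset,
// *<value> for an explicitly set pointer.
func describe(sc MiniSecurityContext) string {
	if sc.ReadOnlyRootFilesystem == nil {
		return "ReadOnlyRootFilesystem:nil (unset, defaults to writable rootfs)"
	}
	return fmt.Sprintf("ReadOnlyRootFilesystem:*%v", *sc.ReadOnlyRootFilesystem)
}

func main() {
	fmt.Println(describe(MiniSecurityContext{}))                                       // unset
	fmt.Println(describe(MiniSecurityContext{ReadOnlyRootFilesystem: boolPtr(false)})) // explicit false
}
```

This is why the readOnlyRootFilesystem=false test above still sees a writable rootfs: both nil and *false leave the root filesystem writable.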
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:30:09.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 14 11:30:09.667: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c38be965-ed25-49a9-b779-d47c07570581" in namespace "downward-api-3006" to be "Succeeded or Failed" May 14 11:30:09.676: INFO: Pod "downwardapi-volume-c38be965-ed25-49a9-b779-d47c07570581": Phase="Pending", Reason="", readiness=false. Elapsed: 8.926012ms May 14 11:30:12.019: INFO: Pod "downwardapi-volume-c38be965-ed25-49a9-b779-d47c07570581": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35237762s May 14 11:30:14.024: INFO: Pod "downwardapi-volume-c38be965-ed25-49a9-b779-d47c07570581": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.356968823s May 14 11:30:16.028: INFO: Pod "downwardapi-volume-c38be965-ed25-49a9-b779-d47c07570581": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.36100589s STEP: Saw pod success May 14 11:30:16.028: INFO: Pod "downwardapi-volume-c38be965-ed25-49a9-b779-d47c07570581" satisfied condition "Succeeded or Failed" May 14 11:30:16.030: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-c38be965-ed25-49a9-b779-d47c07570581 container client-container: STEP: delete the pod May 14 11:30:16.064: INFO: Waiting for pod downwardapi-volume-c38be965-ed25-49a9-b779-d47c07570581 to disappear May 14 11:30:16.076: INFO: Pod downwardapi-volume-c38be965-ed25-49a9-b779-d47c07570581 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:30:16.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3006" for this suite. 
• [SLOW TEST:6.906 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1715,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:30:16.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-5a54b2f8-a5a6-42d9-9f77-ecc667fd2c8b STEP: Creating secret with name s-test-opt-upd-99861e6f-b5f6-49ea-8244-a7ee37d2770e STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5a54b2f8-a5a6-42d9-9f77-ecc667fd2c8b STEP: Updating secret s-test-opt-upd-99861e6f-b5f6-49ea-8244-a7ee37d2770e STEP: Creating secret with name s-test-opt-create-c9224f95-bce6-4200-a559-633e5dd08370 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 
11:30:26.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4220" for this suite. • [SLOW TEST:10.248 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1753,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:30:26.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server May 14 11:30:26.400: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 
11:30:26.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4443" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":115,"skipped":1818,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:30:26.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-6098/secret-test-90b7279d-c2fe-4478-b8e0-5304b17097fd STEP: Creating a pod to test consume secrets May 14 11:30:26.635: INFO: Waiting up to 5m0s for pod "pod-configmaps-1661bf0d-da3d-4c53-a102-c7dfdbd1f7f8" in namespace "secrets-6098" to be "Succeeded or Failed" May 14 11:30:26.659: INFO: Pod "pod-configmaps-1661bf0d-da3d-4c53-a102-c7dfdbd1f7f8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.277386ms May 14 11:30:28.663: INFO: Pod "pod-configmaps-1661bf0d-da3d-4c53-a102-c7dfdbd1f7f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028072809s May 14 11:30:30.768: INFO: Pod "pod-configmaps-1661bf0d-da3d-4c53-a102-c7dfdbd1f7f8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.132856915s STEP: Saw pod success May 14 11:30:30.768: INFO: Pod "pod-configmaps-1661bf0d-da3d-4c53-a102-c7dfdbd1f7f8" satisfied condition "Succeeded or Failed" May 14 11:30:30.772: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-1661bf0d-da3d-4c53-a102-c7dfdbd1f7f8 container env-test: STEP: delete the pod May 14 11:30:31.217: INFO: Waiting for pod pod-configmaps-1661bf0d-da3d-4c53-a102-c7dfdbd1f7f8 to disappear May 14 11:30:31.221: INFO: Pod pod-configmaps-1661bf0d-da3d-4c53-a102-c7dfdbd1f7f8 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:30:31.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6098" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1885,"failed":0} SS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:30:31.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9689, will wait for the garbage collector to delete the pods May 14 11:30:40.081: INFO: Deleting Job.batch foo took: 5.205379ms May 14 11:30:40.381: INFO: Terminating Job.batch foo pods took: 300.312415ms 
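The "Ensuring job was deleted" wait that follows the garbage-collector deletion above is a poll-until-gone loop (it takes roughly 43s in this run). A minimal sketch of such a wait helper — names are hypothetical, this is not the e2e framework's actual code:

```python
import time

def wait_for_deletion(get_fn, timeout=60.0, interval=1.0):
    """Poll get_fn until it reports the object is gone (returns None),
    or raise TimeoutError. Mirrors the shape of the e2e wait loops."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_fn() is None:   # object no longer exists
            return
        time.sleep(interval)
    raise TimeoutError("object still present after %.0fs" % timeout)
```

In the real test, `get_fn` would be a Job GET against the API server that maps a 404 to "gone"; here it is any callable returning None once the object disappears.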
STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:31:23.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9689" for this suite. • [SLOW TEST:52.306 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":117,"skipped":1887,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:31:23.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-172c1f66-2cd3-4b09-b70b-bf4a2c2f5a54 STEP: Creating a pod to test consume configMaps May 14 11:31:23.613: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4c4f5d4e-e3de-4c4e-b7ec-7a5e1fea4742" in namespace "projected-2573" to be "Succeeded or Failed" May 14 11:31:23.618: INFO: Pod "pod-projected-configmaps-4c4f5d4e-e3de-4c4e-b7ec-7a5e1fea4742": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.464434ms May 14 11:31:25.650: INFO: Pod "pod-projected-configmaps-4c4f5d4e-e3de-4c4e-b7ec-7a5e1fea4742": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037258008s May 14 11:31:27.882: INFO: Pod "pod-projected-configmaps-4c4f5d4e-e3de-4c4e-b7ec-7a5e1fea4742": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268599147s May 14 11:31:29.885: INFO: Pod "pod-projected-configmaps-4c4f5d4e-e3de-4c4e-b7ec-7a5e1fea4742": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.272106551s STEP: Saw pod success May 14 11:31:29.885: INFO: Pod "pod-projected-configmaps-4c4f5d4e-e3de-4c4e-b7ec-7a5e1fea4742" satisfied condition "Succeeded or Failed" May 14 11:31:29.888: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-4c4f5d4e-e3de-4c4e-b7ec-7a5e1fea4742 container projected-configmap-volume-test: STEP: delete the pod May 14 11:31:30.158: INFO: Waiting for pod pod-projected-configmaps-4c4f5d4e-e3de-4c4e-b7ec-7a5e1fea4742 to disappear May 14 11:31:30.174: INFO: Pod pod-projected-configmaps-4c4f5d4e-e3de-4c4e-b7ec-7a5e1fea4742 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:31:30.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2573" for this suite. 
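The "consume configMaps as non-root" pod above is, in outline, a pod mounting a projected configMap volume with a non-root `securityContext`. A hedged YAML sketch — pod, namespace, container, and configMap names are taken from the log; the image, args, and uid are assumptions, not the test's actual values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-4c4f5d4e-e3de-4c4e-b7ec-7a5e1fea4742
  namespace: projected-2573
spec:
  securityContext:
    runAsUser: 1000          # non-root uid (assumed; the test sets some non-zero uid)
  restartPolicy: Never       # pod is expected to run to "Succeeded"
  containers:
  - name: projected-configmap-volume-test
    image: example.test/mounttest:latest   # hypothetical image
    args: ["--file_content=/etc/projected-configmap-volume/data-1"]  # hypothetical args
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-172c1f66-2cd3-4b09-b70b-bf4a2c2f5a54
```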
• [SLOW TEST:6.646 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":1892,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:31:30.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 11:31:31.226: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 11:31:33.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052691, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052691, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052691, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052691, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 11:31:36.326: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:31:36.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1545" for this suite. STEP: Destroying namespace "webhook-1545-markers" for this suite. 
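The update/patch steps above toggle the validating webhook's rule set between excluding and re-including the CREATE operation, which is why the non-compliant configMap is admitted in the middle step and rejected again at the end. A hedged sketch of the rules in the "CREATE removed" state — the group/version/resource values are the obvious ones for configMaps, but the exact fields the test patches are assumed, not taken from the test source:

```yaml
# ValidatingWebhookConfiguration rules after the update/patch that
# drops CREATE: configmap creation bypasses the webhook entirely.
rules:
- apiGroups: [""]
  apiVersions: ["v1"]
  operations: ["UPDATE"]          # CREATE removed here
  resources: ["configmaps"]
```

Patching `operations` back to `["CREATE", "UPDATE"]` restores rejection of the non-compliant configMap.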
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.372 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":119,"skipped":1946,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:31:36.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3751 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
Creating a new StatefulSet May 14 11:31:36.654: INFO: Found 0 stateful pods, waiting for 3 May 14 11:31:46.661: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 14 11:31:46.661: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 14 11:31:46.661: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 14 11:31:56.660: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 14 11:31:56.660: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 14 11:31:56.660: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 14 11:31:56.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3751 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 11:31:56.916: INFO: stderr: "I0514 11:31:56.794720 1138 log.go:172] (0xc0009314a0) (0xc0009105a0) Create stream\nI0514 11:31:56.794791 1138 log.go:172] (0xc0009314a0) (0xc0009105a0) Stream added, broadcasting: 1\nI0514 11:31:56.799036 1138 log.go:172] (0xc0009314a0) Reply frame received for 1\nI0514 11:31:56.799079 1138 log.go:172] (0xc0009314a0) (0xc0006a7680) Create stream\nI0514 11:31:56.799095 1138 log.go:172] (0xc0009314a0) (0xc0006a7680) Stream added, broadcasting: 3\nI0514 11:31:56.799724 1138 log.go:172] (0xc0009314a0) Reply frame received for 3\nI0514 11:31:56.799797 1138 log.go:172] (0xc0009314a0) (0xc00050eaa0) Create stream\nI0514 11:31:56.799810 1138 log.go:172] (0xc0009314a0) (0xc00050eaa0) Stream added, broadcasting: 5\nI0514 11:31:56.800559 1138 log.go:172] (0xc0009314a0) Reply frame received for 5\nI0514 11:31:56.881348 1138 log.go:172] (0xc0009314a0) Data frame received for 5\nI0514 11:31:56.881375 1138 log.go:172] (0xc00050eaa0) (5) Data frame handling\nI0514 
11:31:56.881386 1138 log.go:172] (0xc00050eaa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 11:31:56.908743 1138 log.go:172] (0xc0009314a0) Data frame received for 3\nI0514 11:31:56.908769 1138 log.go:172] (0xc0006a7680) (3) Data frame handling\nI0514 11:31:56.908789 1138 log.go:172] (0xc0006a7680) (3) Data frame sent\nI0514 11:31:56.908888 1138 log.go:172] (0xc0009314a0) Data frame received for 5\nI0514 11:31:56.908902 1138 log.go:172] (0xc00050eaa0) (5) Data frame handling\nI0514 11:31:56.909341 1138 log.go:172] (0xc0009314a0) Data frame received for 3\nI0514 11:31:56.909380 1138 log.go:172] (0xc0006a7680) (3) Data frame handling\nI0514 11:31:56.911205 1138 log.go:172] (0xc0009314a0) Data frame received for 1\nI0514 11:31:56.911218 1138 log.go:172] (0xc0009105a0) (1) Data frame handling\nI0514 11:31:56.911226 1138 log.go:172] (0xc0009105a0) (1) Data frame sent\nI0514 11:31:56.911285 1138 log.go:172] (0xc0009314a0) (0xc0009105a0) Stream removed, broadcasting: 1\nI0514 11:31:56.911536 1138 log.go:172] (0xc0009314a0) (0xc0009105a0) Stream removed, broadcasting: 1\nI0514 11:31:56.911549 1138 log.go:172] (0xc0009314a0) (0xc0006a7680) Stream removed, broadcasting: 3\nI0514 11:31:56.911665 1138 log.go:172] (0xc0009314a0) (0xc00050eaa0) Stream removed, broadcasting: 5\n" May 14 11:31:56.916: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 11:31:56.916: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 14 11:32:07.010: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 14 11:32:17.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3751 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 11:32:17.234: INFO: stderr: "I0514 11:32:17.157675 1158 log.go:172] (0xc0009cc000) (0xc00099c000) Create stream\nI0514 11:32:17.157757 1158 log.go:172] (0xc0009cc000) (0xc00099c000) Stream added, broadcasting: 1\nI0514 11:32:17.160432 1158 log.go:172] (0xc0009cc000) Reply frame received for 1\nI0514 11:32:17.160479 1158 log.go:172] (0xc0009cc000) (0xc00099c0a0) Create stream\nI0514 11:32:17.160490 1158 log.go:172] (0xc0009cc000) (0xc00099c0a0) Stream added, broadcasting: 3\nI0514 11:32:17.161668 1158 log.go:172] (0xc0009cc000) Reply frame received for 3\nI0514 11:32:17.161709 1158 log.go:172] (0xc0009cc000) (0xc00099c140) Create stream\nI0514 11:32:17.161720 1158 log.go:172] (0xc0009cc000) (0xc00099c140) Stream added, broadcasting: 5\nI0514 11:32:17.162578 1158 log.go:172] (0xc0009cc000) Reply frame received for 5\nI0514 11:32:17.222009 1158 log.go:172] (0xc0009cc000) Data frame received for 5\nI0514 11:32:17.222053 1158 log.go:172] (0xc00099c140) (5) Data frame handling\nI0514 11:32:17.222065 1158 log.go:172] (0xc00099c140) (5) Data frame sent\nI0514 11:32:17.222072 1158 log.go:172] (0xc0009cc000) Data frame received for 5\nI0514 11:32:17.222077 1158 log.go:172] (0xc00099c140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 11:32:17.222097 1158 log.go:172] (0xc0009cc000) Data frame received for 3\nI0514 11:32:17.222104 1158 log.go:172] (0xc00099c0a0) (3) Data frame handling\nI0514 11:32:17.222111 1158 log.go:172] (0xc00099c0a0) (3) Data frame sent\nI0514 11:32:17.222117 1158 log.go:172] (0xc0009cc000) Data frame received for 3\nI0514 11:32:17.222123 1158 log.go:172] (0xc00099c0a0) (3) Data frame handling\nI0514 11:32:17.223236 1158 log.go:172] (0xc0009cc000) Data frame received for 1\nI0514 11:32:17.223252 1158 log.go:172] (0xc00099c000) (1) Data frame handling\nI0514 11:32:17.223265 1158 log.go:172] (0xc00099c000) 
(1) Data frame sent\nI0514 11:32:17.223279 1158 log.go:172] (0xc0009cc000) (0xc00099c000) Stream removed, broadcasting: 1\nI0514 11:32:17.223513 1158 log.go:172] (0xc0009cc000) (0xc00099c000) Stream removed, broadcasting: 1\nI0514 11:32:17.223528 1158 log.go:172] (0xc0009cc000) (0xc00099c0a0) Stream removed, broadcasting: 3\nI0514 11:32:17.223534 1158 log.go:172] (0xc0009cc000) (0xc00099c140) Stream removed, broadcasting: 5\n" May 14 11:32:17.234: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 11:32:17.234: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 11:32:27.256: INFO: Waiting for StatefulSet statefulset-3751/ss2 to complete update May 14 11:32:27.256: INFO: Waiting for Pod statefulset-3751/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 14 11:32:27.256: INFO: Waiting for Pod statefulset-3751/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 14 11:32:37.262: INFO: Waiting for StatefulSet statefulset-3751/ss2 to complete update May 14 11:32:37.262: INFO: Waiting for Pod statefulset-3751/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 14 11:32:47.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3751 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 11:32:47.931: INFO: stderr: "I0514 11:32:47.380909 1179 log.go:172] (0xc0009000b0) (0xc0007ac140) Create stream\nI0514 11:32:47.380961 1179 log.go:172] (0xc0009000b0) (0xc0007ac140) Stream added, broadcasting: 1\nI0514 11:32:47.384116 1179 log.go:172] (0xc0009000b0) Reply frame received for 1\nI0514 11:32:47.384149 1179 log.go:172] (0xc0009000b0) (0xc0007be000) Create stream\nI0514 11:32:47.384160 1179 log.go:172] (0xc0009000b0) 
(0xc0007be000) Stream added, broadcasting: 3\nI0514 11:32:47.385107 1179 log.go:172] (0xc0009000b0) Reply frame received for 3\nI0514 11:32:47.385304 1179 log.go:172] (0xc0009000b0) (0xc0007be0a0) Create stream\nI0514 11:32:47.385323 1179 log.go:172] (0xc0009000b0) (0xc0007be0a0) Stream added, broadcasting: 5\nI0514 11:32:47.386248 1179 log.go:172] (0xc0009000b0) Reply frame received for 5\nI0514 11:32:47.487840 1179 log.go:172] (0xc0009000b0) Data frame received for 5\nI0514 11:32:47.487859 1179 log.go:172] (0xc0007be0a0) (5) Data frame handling\nI0514 11:32:47.487870 1179 log.go:172] (0xc0007be0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 11:32:47.926277 1179 log.go:172] (0xc0009000b0) Data frame received for 3\nI0514 11:32:47.926312 1179 log.go:172] (0xc0007be000) (3) Data frame handling\nI0514 11:32:47.926327 1179 log.go:172] (0xc0007be000) (3) Data frame sent\nI0514 11:32:47.926344 1179 log.go:172] (0xc0009000b0) Data frame received for 3\nI0514 11:32:47.926361 1179 log.go:172] (0xc0007be000) (3) Data frame handling\nI0514 11:32:47.926377 1179 log.go:172] (0xc0009000b0) Data frame received for 5\nI0514 11:32:47.926387 1179 log.go:172] (0xc0007be0a0) (5) Data frame handling\nI0514 11:32:47.927448 1179 log.go:172] (0xc0009000b0) Data frame received for 1\nI0514 11:32:47.927484 1179 log.go:172] (0xc0007ac140) (1) Data frame handling\nI0514 11:32:47.927510 1179 log.go:172] (0xc0007ac140) (1) Data frame sent\nI0514 11:32:47.927538 1179 log.go:172] (0xc0009000b0) (0xc0007ac140) Stream removed, broadcasting: 1\nI0514 11:32:47.927566 1179 log.go:172] (0xc0009000b0) Go away received\nI0514 11:32:47.927956 1179 log.go:172] (0xc0009000b0) (0xc0007ac140) Stream removed, broadcasting: 1\nI0514 11:32:47.927968 1179 log.go:172] (0xc0009000b0) (0xc0007be000) Stream removed, broadcasting: 3\nI0514 11:32:47.927974 1179 log.go:172] (0xc0009000b0) (0xc0007be0a0) Stream removed, broadcasting: 5\n" May 14 11:32:47.931: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 11:32:47.931: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 11:32:58.110: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 14 11:33:08.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3751 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 11:33:08.470: INFO: stderr: "I0514 11:33:08.401842 1196 log.go:172] (0xc000a45080) (0xc000b0a6e0) Create stream\nI0514 11:33:08.401892 1196 log.go:172] (0xc000a45080) (0xc000b0a6e0) Stream added, broadcasting: 1\nI0514 11:33:08.404452 1196 log.go:172] (0xc000a45080) Reply frame received for 1\nI0514 11:33:08.404493 1196 log.go:172] (0xc000a45080) (0xc000a42000) Create stream\nI0514 11:33:08.404506 1196 log.go:172] (0xc000a45080) (0xc000a42000) Stream added, broadcasting: 3\nI0514 11:33:08.405718 1196 log.go:172] (0xc000a45080) Reply frame received for 3\nI0514 11:33:08.405765 1196 log.go:172] (0xc000a45080) (0xc000b0a780) Create stream\nI0514 11:33:08.405785 1196 log.go:172] (0xc000a45080) (0xc000b0a780) Stream added, broadcasting: 5\nI0514 11:33:08.406518 1196 log.go:172] (0xc000a45080) Reply frame received for 5\nI0514 11:33:08.463301 1196 log.go:172] (0xc000a45080) Data frame received for 5\nI0514 11:33:08.463343 1196 log.go:172] (0xc000b0a780) (5) Data frame handling\nI0514 11:33:08.463361 1196 log.go:172] (0xc000b0a780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 11:33:08.463403 1196 log.go:172] (0xc000a45080) Data frame received for 5\nI0514 11:33:08.463438 1196 log.go:172] (0xc000b0a780) (5) Data frame handling\nI0514 11:33:08.463462 1196 log.go:172] (0xc000a45080) Data frame received for 3\nI0514 11:33:08.463472 1196 log.go:172] (0xc000a42000) (3) Data 
frame handling\nI0514 11:33:08.463485 1196 log.go:172] (0xc000a42000) (3) Data frame sent\nI0514 11:33:08.463546 1196 log.go:172] (0xc000a45080) Data frame received for 3\nI0514 11:33:08.463604 1196 log.go:172] (0xc000a42000) (3) Data frame handling\nI0514 11:33:08.464705 1196 log.go:172] (0xc000a45080) Data frame received for 1\nI0514 11:33:08.464732 1196 log.go:172] (0xc000b0a6e0) (1) Data frame handling\nI0514 11:33:08.464745 1196 log.go:172] (0xc000b0a6e0) (1) Data frame sent\nI0514 11:33:08.464757 1196 log.go:172] (0xc000a45080) (0xc000b0a6e0) Stream removed, broadcasting: 1\nI0514 11:33:08.464772 1196 log.go:172] (0xc000a45080) Go away received\nI0514 11:33:08.465073 1196 log.go:172] (0xc000a45080) (0xc000b0a6e0) Stream removed, broadcasting: 1\nI0514 11:33:08.465102 1196 log.go:172] (0xc000a45080) (0xc000a42000) Stream removed, broadcasting: 3\nI0514 11:33:08.465280 1196 log.go:172] (0xc000a45080) (0xc000b0a780) Stream removed, broadcasting: 5\n" May 14 11:33:08.470: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 11:33:08.470: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 11:33:28.552: INFO: Waiting for StatefulSet statefulset-3751/ss2 to complete update May 14 11:33:28.552: INFO: Waiting for Pod statefulset-3751/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 14 11:33:38.559: INFO: Deleting all statefulset in ns statefulset-3751 May 14 11:33:38.562: INFO: Scaling statefulset ss2 to 0 May 14 11:34:08.655: INFO: Waiting for statefulset status.replicas updated to 0 May 14 11:34:08.657: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:34:08.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3751" for this suite. • [SLOW TEST:152.348 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":120,"skipped":1973,"failed":0} [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:34:08.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9214.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9214.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
dns-test-service.dns-9214.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9214.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9214.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9214.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9214.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9214.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9214.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9214.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 134.18.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.18.134_udp@PTR;check="$$(dig +tcp +noall +answer +search 134.18.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.18.134_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9214.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9214.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9214.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9214.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9214.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9214.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9214.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9214.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9214.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9214.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9214.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 134.18.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.18.134_udp@PTR;check="$$(dig +tcp +noall +answer +search 134.18.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.18.134_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 11:34:22.384: INFO: Unable to read wheezy_udp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:22.387: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:22.391: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:22.394: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:22.414: INFO: Unable to read jessie_udp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:22.417: INFO: Unable to read jessie_tcp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:22.420: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod 
dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:22.424: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:22.443: INFO: Lookups using dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5 failed for: [wheezy_udp@dns-test-service.dns-9214.svc.cluster.local wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local jessie_udp@dns-test-service.dns-9214.svc.cluster.local jessie_tcp@dns-test-service.dns-9214.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local] May 14 11:34:27.448: INFO: Unable to read wheezy_udp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:27.452: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:27.455: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:27.458: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod 
dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:27.476: INFO: Unable to read jessie_udp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:27.479: INFO: Unable to read jessie_tcp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:27.481: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:27.483: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:27.495: INFO: Lookups using dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5 failed for: [wheezy_udp@dns-test-service.dns-9214.svc.cluster.local wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local jessie_udp@dns-test-service.dns-9214.svc.cluster.local jessie_tcp@dns-test-service.dns-9214.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local] May 14 11:34:32.448: INFO: Unable to read wheezy_udp@dns-test-service.dns-9214.svc.cluster.local from pod 
dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:32.451: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:32.454: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:32.457: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:32.506: INFO: Unable to read jessie_udp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:32.508: INFO: Unable to read jessie_tcp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:32.510: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:32.512: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not 
find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:32.527: INFO: Lookups using dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5 failed for: [wheezy_udp@dns-test-service.dns-9214.svc.cluster.local wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local jessie_udp@dns-test-service.dns-9214.svc.cluster.local jessie_tcp@dns-test-service.dns-9214.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local] May 14 11:34:37.447: INFO: Unable to read wheezy_udp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:37.450: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:37.453: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:37.455: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:37.469: INFO: Unable to read jessie_udp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods 
dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:37.471: INFO: Unable to read jessie_tcp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:37.472: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:37.474: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:37.486: INFO: Lookups using dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5 failed for: [wheezy_udp@dns-test-service.dns-9214.svc.cluster.local wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local jessie_udp@dns-test-service.dns-9214.svc.cluster.local jessie_tcp@dns-test-service.dns-9214.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local] May 14 11:34:42.494: INFO: Unable to read wheezy_udp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:42.498: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods 
dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:42.533: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:42.536: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:42.589: INFO: Unable to read jessie_udp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:42.592: INFO: Unable to read jessie_tcp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:42.710: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:42.714: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:42.832: INFO: Lookups using dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5 failed for: [wheezy_udp@dns-test-service.dns-9214.svc.cluster.local wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local jessie_udp@dns-test-service.dns-9214.svc.cluster.local jessie_tcp@dns-test-service.dns-9214.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local] May 14 11:34:47.447: INFO: Unable to read wheezy_udp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:47.451: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:47.454: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:47.457: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:47.474: INFO: Unable to read jessie_udp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:47.476: INFO: Unable to read jessie_tcp@dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:47.478: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:47.481: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local from pod dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5: the server could not find the requested resource (get pods dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5) May 14 11:34:47.496: INFO: Lookups using dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5 failed for: [wheezy_udp@dns-test-service.dns-9214.svc.cluster.local wheezy_tcp@dns-test-service.dns-9214.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local jessie_udp@dns-test-service.dns-9214.svc.cluster.local jessie_tcp@dns-test-service.dns-9214.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9214.svc.cluster.local] May 14 11:34:52.950: INFO: DNS probes using dns-9214/dns-test-72ca85a6-17e0-40aa-97f8-38d7f81a6ef5 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:34:54.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9214" for this suite. 
• [SLOW TEST:45.461 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":121,"skipped":1973,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:34:54.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 11:34:57.527: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 11:34:59.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052897, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052897, 
loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052898, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052897, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:35:01.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052897, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052897, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052898, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052897, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:35:03.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052897, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052897, loc:(*time.Location)(0x7b200c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052898, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052897, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 11:35:06.811: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:35:07.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1497" for this suite. STEP: Destroying namespace "webhook-1497-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.978 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":122,"skipped":1978,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:35:08.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0514 11:35:09.981463 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 14 11:35:09.981: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:35:09.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8668" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":123,"skipped":1985,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:35:09.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-4ee21e18-1317-4807-97a3-3271b6465381 May 14 11:35:10.091: INFO: Pod name my-hostname-basic-4ee21e18-1317-4807-97a3-3271b6465381: Found 0 pods out of 1 May 14 11:35:15.111: INFO: Pod name my-hostname-basic-4ee21e18-1317-4807-97a3-3271b6465381: Found 1 pods out of 1 May 14 11:35:15.111: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4ee21e18-1317-4807-97a3-3271b6465381" are running May 14 11:35:17.570: INFO: Pod "my-hostname-basic-4ee21e18-1317-4807-97a3-3271b6465381-szdpt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 11:35:10 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 11:35:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4ee21e18-1317-4807-97a3-3271b6465381]} 
{Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 11:35:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4ee21e18-1317-4807-97a3-3271b6465381]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 11:35:10 +0000 UTC Reason: Message:}]) May 14 11:35:17.570: INFO: Trying to dial the pod May 14 11:35:22.582: INFO: Controller my-hostname-basic-4ee21e18-1317-4807-97a3-3271b6465381: Got expected result from replica 1 [my-hostname-basic-4ee21e18-1317-4807-97a3-3271b6465381-szdpt]: "my-hostname-basic-4ee21e18-1317-4807-97a3-3271b6465381-szdpt", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 14 11:35:22.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4816" for this suite. 
• [SLOW TEST:12.603 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":124,"skipped":1989,"failed":0} [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 14 11:35:22.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 14 11:35:22.689: INFO: (0) /api/v1/nodes/kali-worker2/proxy/logs/:
alternatives.log
containers/

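The directory listing above is what the apiserver returns for the node's logs proxy subresource. As a minimal sketch (not part of the test code), the request path the test builds for node `kali-worker2` can be constructed like this:

```python
# Sketch: building the node proxy-logs API path the test above requests.
# The apiserver forwards this path to the kubelet, which serves the
# node's /var/log directory listing (alternatives.log, containers/, ...).
def node_proxy_logs_path(node_name: str) -> str:
    """Return the API path for the kubelet logs proxy subresource."""
    return f"/api/v1/nodes/{node_name}/proxy/logs/"

print(node_proxy_logs_path("kali-worker2"))
# /api/v1/nodes/kali-worker2/proxy/logs/
```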
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:35:22.852: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-4f235db4-022b-43b7-b794-39b451abcb88" in namespace "security-context-test-9528" to be "Succeeded or Failed"
May 14 11:35:22.862: INFO: Pod "alpine-nnp-false-4f235db4-022b-43b7-b794-39b451abcb88": Phase="Pending", Reason="", readiness=false. Elapsed: 10.605047ms
May 14 11:35:24.866: INFO: Pod "alpine-nnp-false-4f235db4-022b-43b7-b794-39b451abcb88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014394434s
May 14 11:35:26.870: INFO: Pod "alpine-nnp-false-4f235db4-022b-43b7-b794-39b451abcb88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018544253s
May 14 11:35:26.870: INFO: Pod "alpine-nnp-false-4f235db4-022b-43b7-b794-39b451abcb88" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:35:26.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9528" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2101,"failed":0}
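The test above verifies that a pod whose container sets `allowPrivilegeEscalation: false` runs to completion without gaining privileges. A minimal sketch of the pod manifest it implies follows; the image and command are illustrative assumptions, only the security-context field is the point:

```python
# Sketch of a pod spec with privilege escalation disabled, as exercised
# by the conformance test above. Image and command are assumed, not
# taken from the log; the securityContext field is the real subject.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "alpine-nnp-false"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "alpine-nnp-false",
            "image": "alpine:3.12",            # assumed image
            "command": ["sh", "-c", "id -u"],  # assumed command
            "securityContext": {"allowPrivilegeEscalation": False},
        }],
    },
}
assert pod["spec"]["containers"][0]["securityContext"]["allowPrivilegeEscalation"] is False
```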
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:35:26.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 14 11:35:27.283: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 11:35:27.349: INFO: Number of nodes with available pods: 0
May 14 11:35:27.349: INFO: Node kali-worker is running more than one daemon pod
May 14 11:35:28.354: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 11:35:28.358: INFO: Number of nodes with available pods: 0
May 14 11:35:28.358: INFO: Node kali-worker is running more than one daemon pod
May 14 11:35:29.453: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 11:35:29.458: INFO: Number of nodes with available pods: 0
May 14 11:35:29.458: INFO: Node kali-worker is running more than one daemon pod
May 14 11:35:30.355: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 11:35:30.359: INFO: Number of nodes with available pods: 0
May 14 11:35:30.359: INFO: Node kali-worker is running more than one daemon pod
May 14 11:35:31.355: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 11:35:31.360: INFO: Number of nodes with available pods: 0
May 14 11:35:31.360: INFO: Node kali-worker is running more than one daemon pod
May 14 11:35:32.354: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 11:35:32.357: INFO: Number of nodes with available pods: 2
May 14 11:35:32.357: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 14 11:35:32.386: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 11:35:32.427: INFO: Number of nodes with available pods: 2
May 14 11:35:32.427: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3810, will wait for the garbage collector to delete the pods
May 14 11:35:33.517: INFO: Deleting DaemonSet.extensions daemon-set took: 6.248801ms
May 14 11:35:33.917: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.472532ms
May 14 11:35:43.820: INFO: Number of nodes with available pods: 0
May 14 11:35:43.820: INFO: Number of running nodes: 0, number of available pods: 0
May 14 11:35:43.822: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3810/daemonsets","resourceVersion":"4278425"},"items":null}

May 14 11:35:43.825: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3810/pods","resourceVersion":"4278425"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:35:43.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3810" for this suite.

• [SLOW TEST:16.955 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":127,"skipped":2118,"failed":0}
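The repeated "DaemonSet pods can't tolerate node kali-control-plane" lines above show the scheduling check in action: the daemon pod lacks a toleration for the control-plane `NoSchedule` taint, so that node is skipped. A simplified sketch of that key-matching check (real taint matching also considers value, operator, and effect):

```python
# Simplified sketch of the taint/toleration check the log lines above
# describe. Real Kubernetes matching also compares value, operator, and
# effect; this keeps only the key-based NoSchedule case for clarity.
def tolerates(taints, tolerations):
    """True if every NoSchedule taint key is matched by some toleration."""
    keys = {t["key"] for t in tolerations}
    return all(t["key"] in keys for t in taints if t.get("effect") == "NoSchedule")

control_plane = [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]
assert not tolerates(control_plane, [])         # daemon pod skipped on this node
assert tolerates(control_plane, control_plane)  # a tolerating pod would schedule
```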
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:35:43.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:36:43.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1392" for this suite.

• [SLOW TEST:60.104 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2149,"failed":0}
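The probe test above relies on a property worth spelling out: a failing readiness probe keeps the pod out of the Ready condition but, unlike a liveness probe, never restarts the container. A sketch of the container spec it implies, with an assumed image and an always-failing exec probe:

```python
# Sketch of a container whose readiness probe always fails, as in the
# test above: the pod is never marked Ready, and because readiness
# failures do not trigger restarts, restartCount stays at 0.
# The image and probe command are illustrative assumptions.
container = {
    "name": "test-webserver",
    "image": "busybox",  # assumed image
    "readinessProbe": {
        "exec": {"command": ["/bin/false"]},  # always fails
        "initialDelaySeconds": 0,
        "periodSeconds": 5,
    },
}
# Readiness gates traffic; only a livenessProbe would restart the container.
assert "livenessProbe" not in container
```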
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:36:43.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-3c45cf3f-f624-4ac3-bacf-676741070e08
STEP: Creating a pod to test consume secrets
May 14 11:36:44.040: INFO: Waiting up to 5m0s for pod "pod-secrets-63880a9c-94a5-4322-9d5d-d9ba17b86628" in namespace "secrets-578" to be "Succeeded or Failed"
May 14 11:36:44.052: INFO: Pod "pod-secrets-63880a9c-94a5-4322-9d5d-d9ba17b86628": Phase="Pending", Reason="", readiness=false. Elapsed: 11.981873ms
May 14 11:36:46.056: INFO: Pod "pod-secrets-63880a9c-94a5-4322-9d5d-d9ba17b86628": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016022403s
May 14 11:36:48.059: INFO: Pod "pod-secrets-63880a9c-94a5-4322-9d5d-d9ba17b86628": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019068234s
STEP: Saw pod success
May 14 11:36:48.059: INFO: Pod "pod-secrets-63880a9c-94a5-4322-9d5d-d9ba17b86628" satisfied condition "Succeeded or Failed"
May 14 11:36:48.061: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-63880a9c-94a5-4322-9d5d-d9ba17b86628 container secret-volume-test: 
STEP: delete the pod
May 14 11:36:48.142: INFO: Waiting for pod pod-secrets-63880a9c-94a5-4322-9d5d-d9ba17b86628 to disappear
May 14 11:36:48.290: INFO: Pod pod-secrets-63880a9c-94a5-4322-9d5d-d9ba17b86628 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:36:48.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-578" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2165,"failed":0}
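The secret-volume test above mounts the secret with `defaultMode` set. One detail the log doesn't show: `defaultMode` is an integer in the API, conventionally written in octal. A sketch of the volume stanza, reusing the secret name from the log and assuming the 0400 mode this conformance test conventionally uses:

```python
# Sketch of the secret volume consumed by the test above. The secret
# name is taken from the log; the 0400 mode is an assumption based on
# the test's convention. Note defaultMode is an int (0o400 == 256).
volume = {
    "name": "secret-volume",
    "secret": {
        "secretName": "secret-test-3c45cf3f-f624-4ac3-bacf-676741070e08",
        "defaultMode": 0o400,  # read-only for the owner
    },
}
assert volume["secret"]["defaultMode"] == 256
```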

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:36:48.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 11:36:49.824: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 11:36:51.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053009, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053009, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053009, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053009, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 11:36:53.842: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053009, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053009, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053009, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053009, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 11:36:56.871: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 14 11:36:56.889: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:36:57.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1379" for this suite.
STEP: Destroying namespace "webhook-1379-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.429 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":130,"skipped":2165,"failed":0}
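The webhook test above registers a validating webhook against CRD creation, then confirms a new CustomResourceDefinition is rejected. A sketch of the kind of ValidatingWebhookConfiguration involved; the configuration and webhook names and the service path are illustrative assumptions, while the service name and namespace are taken from the log:

```python
# Sketch of a ValidatingWebhookConfiguration like the one the test above
# registers: it intercepts CustomResourceDefinition CREATE requests, and
# the always-deny webhook behind it rejects them. Names marked "assumed"
# are illustrative, not from the log.
webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "deny-crd-webhook"},  # assumed name
    "webhooks": [{
        "name": "deny-crd.example.com",  # assumed name
        "rules": [{
            "apiGroups": ["apiextensions.k8s.io"],
            "apiVersions": ["*"],
            "operations": ["CREATE"],
            "resources": ["customresourcedefinitions"],
        }],
        "clientConfig": {"service": {
            "name": "e2e-test-webhook",   # from the log
            "namespace": "webhook-1379",  # from the log
            "path": "/crd",               # assumed path
        }},
        "admissionReviewVersions": ["v1"],
        "sideEffects": "None",
    }],
}
rule = webhook_config["webhooks"][0]["rules"][0]
assert "customresourcedefinitions" in rule["resources"]
```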
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:36:57.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 14 11:37:06.788: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 14 11:37:06.796: INFO: Pod pod-with-poststart-exec-hook still exists
May 14 11:37:08.796: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 14 11:37:08.801: INFO: Pod pod-with-poststart-exec-hook still exists
May 14 11:37:10.796: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 14 11:37:10.802: INFO: Pod pod-with-poststart-exec-hook still exists
May 14 11:37:12.796: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 14 11:37:12.800: INFO: Pod pod-with-poststart-exec-hook still exists
May 14 11:37:14.796: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 14 11:37:14.808: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:37:14.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2740" for this suite.

• [SLOW TEST:17.086 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2215,"failed":0}
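The lifecycle-hook test above creates a pod whose container runs a postStart exec hook right after start, then checks the hook's effect from a handler pod. A sketch of the lifecycle stanza involved; the log only names the pod, so the image and hook command are illustrative assumptions:

```python
# Sketch of the postStart exec hook the test above exercises: the hook
# command runs inside the container immediately after it starts. Image
# and command are assumed; only the pod name comes from the log.
container = {
    "name": "pod-with-poststart-exec-hook",
    "image": "busybox",  # assumed image
    "lifecycle": {
        "postStart": {
            "exec": {"command": ["sh", "-c", "echo started > /tmp/poststart"]},  # assumed
        },
    },
}
assert "exec" in container["lifecycle"]["postStart"]
```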
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:37:14.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-sjjlz in namespace proxy-7637
I0514 11:37:14.999466       7 runners.go:190] Created replication controller with name: proxy-service-sjjlz, namespace: proxy-7637, replica count: 1
I0514 11:37:16.049867       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0514 11:37:17.050062       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0514 11:37:18.050331       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0514 11:37:19.050600       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0514 11:37:20.050803       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0514 11:37:21.050990       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0514 11:37:22.051239       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0514 11:37:23.051460       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0514 11:37:24.051731       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0514 11:37:25.051967       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0514 11:37:26.052193       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0514 11:37:27.052391       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0514 11:37:28.052577       7 runners.go:190] proxy-service-sjjlz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 14 11:37:28.286: INFO: setup took 13.412857231s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
May 14 11:37:28.510: INFO: (0) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 222.994796ms)
May 14 11:37:28.510: INFO: (0) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 223.302243ms)
May 14 11:37:28.510: INFO: (0) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j/proxy/: test (200; 223.24218ms)
May 14 11:37:28.510: INFO: (0) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 223.177263ms)
May 14 11:37:28.510: INFO: (0) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 223.586096ms)
May 14 11:37:28.510: INFO: (0) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 223.513701ms)
May 14 11:37:28.510: INFO: (0) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 223.691652ms)
May 14 11:37:28.511: INFO: (0) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 224.091958ms)
May 14 11:37:28.511: INFO: (0) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 223.995335ms)
May 14 11:37:28.515: INFO: (0) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 228.363956ms)
May 14 11:37:28.515: INFO: (0) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 228.957471ms)
May 14 11:37:28.521: INFO: (0) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 234.83807ms)
May 14 11:37:28.522: INFO: (0) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 234.753786ms)
May 14 11:37:28.522: INFO: (0) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test (200; 6.032814ms)
May 14 11:37:28.529: INFO: (1) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 5.767028ms)
May 14 11:37:28.529: INFO: (1) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 5.860313ms)
May 14 11:37:28.529: INFO: (1) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 6.378679ms)
May 14 11:37:28.533: INFO: (2) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 3.794036ms)
May 14 11:37:28.533: INFO: (2) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 4.070106ms)
May 14 11:37:28.533: INFO: (2) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 4.102616ms)
May 14 11:37:28.534: INFO: (2) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 4.198838ms)
May 14 11:37:28.534: INFO: (2) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 4.411129ms)
May 14 11:37:28.534: INFO: (2) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test (200; 6.099928ms)
May 14 11:37:28.535: INFO: (2) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 6.182796ms)
May 14 11:37:28.539: INFO: (3) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 3.88662ms)
May 14 11:37:28.539: INFO: (3) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 3.758262ms)
May 14 11:37:28.540: INFO: (3) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 4.813913ms)
May 14 11:37:28.540: INFO: (3) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 4.858848ms)
May 14 11:37:28.540: INFO: (3) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 4.906261ms)
May 14 11:37:28.541: INFO: (3) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j/proxy/: test (200; 5.392852ms)
May 14 11:37:28.541: INFO: (3) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test<... (200; 5.80427ms)
May 14 11:37:28.542: INFO: (3) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 5.927576ms)
May 14 11:37:28.542: INFO: (3) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 6.136935ms)
May 14 11:37:28.542: INFO: (3) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 6.226258ms)
May 14 11:37:28.542: INFO: (3) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname2/proxy/: tls qux (200; 6.177581ms)
May 14 11:37:28.542: INFO: (3) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 6.570698ms)
May 14 11:37:28.545: INFO: (4) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 2.997785ms)
May 14 11:37:28.547: INFO: (4) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 4.841727ms)
May 14 11:37:28.547: INFO: (4) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 5.081023ms)
May 14 11:37:28.547: INFO: (4) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname2/proxy/: tls qux (200; 5.068129ms)
May 14 11:37:28.547: INFO: (4) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 5.143481ms)
May 14 11:37:28.548: INFO: (4) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 5.061529ms)
May 14 11:37:28.548: INFO: (4) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 5.211611ms)
May 14 11:37:28.548: INFO: (4) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 5.144581ms)
May 14 11:37:28.548: INFO: (4) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 5.393921ms)
May 14 11:37:28.548: INFO: (4) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 5.559847ms)
May 14 11:37:28.548: INFO: (4) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 5.487015ms)
May 14 11:37:28.548: INFO: (4) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 5.670837ms)
May 14 11:37:28.548: INFO: (4) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 5.684835ms)
May 14 11:37:28.548: INFO: (4) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j/proxy/: test (200; 5.860399ms)
May 14 11:37:28.548: INFO: (4) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 5.929782ms)
May 14 11:37:28.548: INFO: (4) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: ... (200; 5.184961ms)
May 14 11:37:28.553: INFO: (5) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 5.111995ms)
May 14 11:37:28.553: INFO: (5) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 5.168439ms)
May 14 11:37:28.553: INFO: (5) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 5.115928ms)
May 14 11:37:28.553: INFO: (5) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 5.200232ms)
May 14 11:37:28.554: INFO: (5) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 5.21428ms)
May 14 11:37:28.554: INFO: (5) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname2/proxy/: tls qux (200; 5.333431ms)
May 14 11:37:28.554: INFO: (5) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 5.219052ms)
May 14 11:37:28.554: INFO: (5) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 5.33667ms)
May 14 11:37:28.554: INFO: (5) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test (200; 5.481967ms)
May 14 11:37:28.554: INFO: (5) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 5.755622ms)
May 14 11:37:28.554: INFO: (5) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 5.873306ms)
May 14 11:37:28.554: INFO: (5) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 5.827365ms)
May 14 11:37:28.559: INFO: (6) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 4.256285ms)
May 14 11:37:28.559: INFO: (6) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 4.394213ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 5.479784ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 5.476982ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 5.595485ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 5.506877ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname2/proxy/: tls qux (200; 5.612781ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 5.574849ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 6.126892ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 6.198716ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 6.163511ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j/proxy/: test (200; 6.173717ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 6.19125ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 6.171554ms)
May 14 11:37:28.560: INFO: (6) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 6.237975ms)
May 14 11:37:28.561: INFO: (6) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: ... (200; 4.494263ms)
May 14 11:37:28.565: INFO: (7) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j/proxy/: test (200; 4.661454ms)
May 14 11:37:28.565: INFO: (7) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 4.534625ms)
May 14 11:37:28.565: INFO: (7) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 4.569614ms)
May 14 11:37:28.566: INFO: (7) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 4.61531ms)
May 14 11:37:28.566: INFO: (7) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 4.804379ms)
May 14 11:37:28.566: INFO: (7) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test (200; 2.975493ms)
May 14 11:37:28.574: INFO: (8) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 4.542456ms)
May 14 11:37:28.574: INFO: (8) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 5.02093ms)
May 14 11:37:28.574: INFO: (8) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 5.024914ms)
May 14 11:37:28.574: INFO: (8) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 4.983976ms)
May 14 11:37:28.574: INFO: (8) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 5.070131ms)
May 14 11:37:28.575: INFO: (8) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 5.636094ms)
May 14 11:37:28.575: INFO: (8) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 5.664369ms)
May 14 11:37:28.575: INFO: (8) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test<... (200; 6.296823ms)
May 14 11:37:28.575: INFO: (8) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 6.217032ms)
May 14 11:37:28.575: INFO: (8) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 6.234569ms)
May 14 11:37:28.575: INFO: (8) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 6.412337ms)
May 14 11:37:28.580: INFO: (9) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j/proxy/: test (200; 4.283015ms)
May 14 11:37:28.581: INFO: (9) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 5.606483ms)
May 14 11:37:28.582: INFO: (9) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: ... (200; 7.119489ms)
May 14 11:37:28.583: INFO: (9) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 7.228878ms)
May 14 11:37:28.583: INFO: (9) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 7.149203ms)
May 14 11:37:28.583: INFO: (9) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 7.237298ms)
May 14 11:37:28.583: INFO: (9) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 7.247127ms)
May 14 11:37:28.583: INFO: (9) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 7.267519ms)
May 14 11:37:28.585: INFO: (10) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 2.162915ms)
May 14 11:37:28.587: INFO: (10) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 3.995362ms)
May 14 11:37:28.587: INFO: (10) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test<... (200; 4.644408ms)
May 14 11:37:28.588: INFO: (10) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 4.638057ms)
May 14 11:37:28.588: INFO: (10) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 4.681357ms)
May 14 11:37:28.588: INFO: (10) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 5.516226ms)
May 14 11:37:28.588: INFO: (10) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 5.484146ms)
May 14 11:37:28.588: INFO: (10) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j/proxy/: test (200; 5.632714ms)
May 14 11:37:28.588: INFO: (10) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 5.633814ms)
May 14 11:37:28.588: INFO: (10) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 5.619142ms)
May 14 11:37:28.588: INFO: (10) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 5.600912ms)
May 14 11:37:28.594: INFO: (11) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 5.006899ms)
May 14 11:37:28.594: INFO: (11) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 5.074067ms)
May 14 11:37:28.594: INFO: (11) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 5.235903ms)
May 14 11:37:28.594: INFO: (11) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test (200; 5.88934ms)
May 14 11:37:28.595: INFO: (11) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 5.862256ms)
May 14 11:37:28.595: INFO: (11) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 6.004733ms)
May 14 11:37:28.595: INFO: (11) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 6.107457ms)
May 14 11:37:28.595: INFO: (11) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 6.188072ms)
May 14 11:37:28.595: INFO: (11) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 6.474404ms)
May 14 11:37:28.596: INFO: (11) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 7.760198ms)
May 14 11:37:28.597: INFO: (11) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 7.754001ms)
May 14 11:37:28.597: INFO: (11) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 7.821421ms)
May 14 11:37:28.597: INFO: (11) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname2/proxy/: tls qux (200; 8.249602ms)
May 14 11:37:28.600: INFO: (12) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 3.13606ms)
May 14 11:37:28.601: INFO: (12) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j/proxy/: test (200; 3.463895ms)
May 14 11:37:28.602: INFO: (12) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test<... (200; 4.688076ms)
May 14 11:37:28.602: INFO: (12) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 4.700604ms)
May 14 11:37:28.602: INFO: (12) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 4.922867ms)
May 14 11:37:28.602: INFO: (12) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 4.964593ms)
May 14 11:37:28.602: INFO: (12) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 4.979711ms)
May 14 11:37:28.603: INFO: (12) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 5.92301ms)
May 14 11:37:28.603: INFO: (12) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 5.922651ms)
May 14 11:37:28.603: INFO: (12) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 6.21283ms)
May 14 11:37:28.603: INFO: (12) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 6.249051ms)
May 14 11:37:28.603: INFO: (12) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 6.306314ms)
May 14 11:37:28.604: INFO: (12) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname2/proxy/: tls qux (200; 6.391347ms)
May 14 11:37:28.610: INFO: (13) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 6.157046ms)
May 14 11:37:28.610: INFO: (13) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j/proxy/: test (200; 6.355179ms)
May 14 11:37:28.610: INFO: (13) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 6.452517ms)
May 14 11:37:28.610: INFO: (13) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 6.422824ms)
May 14 11:37:28.610: INFO: (13) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test<... (200; 6.435921ms)
May 14 11:37:28.610: INFO: (13) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 6.468679ms)
May 14 11:37:28.610: INFO: (13) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 6.486679ms)
May 14 11:37:28.610: INFO: (13) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 6.477639ms)
May 14 11:37:28.611: INFO: (13) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname2/proxy/: tls qux (200; 7.04392ms)
May 14 11:37:28.612: INFO: (13) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 8.119971ms)
May 14 11:37:28.612: INFO: (13) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 8.209443ms)
May 14 11:37:28.612: INFO: (13) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 8.29655ms)
May 14 11:37:28.612: INFO: (13) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 8.305523ms)
May 14 11:37:28.612: INFO: (13) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 8.337962ms)
May 14 11:37:28.618: INFO: (14) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 5.971069ms)
May 14 11:37:28.618: INFO: (14) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 6.058274ms)
May 14 11:37:28.618: INFO: (14) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname2/proxy/: tls qux (200; 6.148058ms)
May 14 11:37:28.618: INFO: (14) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 6.186874ms)
May 14 11:37:28.618: INFO: (14) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 6.109045ms)
May 14 11:37:28.618: INFO: (14) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 6.134564ms)
May 14 11:37:28.618: INFO: (14) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 6.289195ms)
May 14 11:37:28.619: INFO: (14) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 6.50059ms)
May 14 11:37:28.619: INFO: (14) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 6.677303ms)
May 14 11:37:28.619: INFO: (14) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 6.658902ms)
May 14 11:37:28.619: INFO: (14) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 6.684971ms)
May 14 11:37:28.619: INFO: (14) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 6.836297ms)
May 14 11:37:28.619: INFO: (14) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 6.87719ms)
May 14 11:37:28.619: INFO: (14) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j/proxy/: test (200; 7.069514ms)
May 14 11:37:28.619: INFO: (14) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test<... (200; 7.148959ms)
May 14 11:37:28.624: INFO: (15) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 4.769432ms)
May 14 11:37:28.624: INFO: (15) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 4.987976ms)
May 14 11:37:28.625: INFO: (15) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test (200; 5.443252ms)
May 14 11:37:28.625: INFO: (15) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 5.534956ms)
May 14 11:37:28.625: INFO: (15) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 5.528299ms)
May 14 11:37:28.625: INFO: (15) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 5.548084ms)
May 14 11:37:28.625: INFO: (15) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 5.876272ms)
May 14 11:37:28.625: INFO: (15) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 5.995714ms)
May 14 11:37:28.625: INFO: (15) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 6.154919ms)
May 14 11:37:28.628: INFO: (15) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 9.134465ms)
May 14 11:37:28.628: INFO: (15) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 9.134859ms)
May 14 11:37:28.629: INFO: (15) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 9.094257ms)
May 14 11:37:28.629: INFO: (15) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname2/proxy/: tls qux (200; 9.078002ms)
May 14 11:37:28.629: INFO: (15) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 9.137883ms)
May 14 11:37:28.629: INFO: (15) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 9.352711ms)
May 14 11:37:28.632: INFO: (16) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 2.564132ms)
May 14 11:37:28.632: INFO: (16) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 2.639565ms)
May 14 11:37:28.632: INFO: (16) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 2.723351ms)
May 14 11:37:28.633: INFO: (16) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 4.102983ms)
May 14 11:37:28.633: INFO: (16) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 4.405545ms)
May 14 11:37:28.635: INFO: (16) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 5.61845ms)
May 14 11:37:28.635: INFO: (16) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 5.914885ms)
May 14 11:37:28.635: INFO: (16) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j/proxy/: test (200; 6.007058ms)
May 14 11:37:28.635: INFO: (16) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: ... (200; 3.130754ms)
May 14 11:37:28.640: INFO: (17) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test<... (200; 4.39271ms)
May 14 11:37:28.641: INFO: (17) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 4.445566ms)
May 14 11:37:28.641: INFO: (17) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 4.45656ms)
May 14 11:37:28.641: INFO: (17) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 4.430573ms)
May 14 11:37:28.641: INFO: (17) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 4.473799ms)
May 14 11:37:28.641: INFO: (17) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 4.676002ms)
May 14 11:37:28.641: INFO: (17) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname2/proxy/: tls qux (200; 5.036625ms)
May 14 11:37:28.641: INFO: (17) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j/proxy/: test (200; 5.022568ms)
May 14 11:37:28.641: INFO: (17) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 5.08105ms)
May 14 11:37:28.641: INFO: (17) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 5.063149ms)
May 14 11:37:28.643: INFO: (17) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 6.60498ms)
May 14 11:37:28.643: INFO: (17) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 6.607852ms)
May 14 11:37:28.643: INFO: (17) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 6.786516ms)
May 14 11:37:28.643: INFO: (17) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 6.789587ms)
May 14 11:37:28.646: INFO: (18) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 3.149067ms)
May 14 11:37:28.646: INFO: (18) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 3.227372ms)
May 14 11:37:28.647: INFO: (18) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 3.671811ms)
May 14 11:37:28.647: INFO: (18) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 3.578712ms)
May 14 11:37:28.647: INFO: (18) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 3.854443ms)
May 14 11:37:28.647: INFO: (18) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test (200; 4.002327ms)
May 14 11:37:28.647: INFO: (18) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 3.902078ms)
May 14 11:37:28.647: INFO: (18) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 4.289992ms)
May 14 11:37:28.649: INFO: (18) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 5.650079ms)
May 14 11:37:28.649: INFO: (18) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 5.831202ms)
May 14 11:37:28.649: INFO: (18) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 5.772041ms)
May 14 11:37:28.649: INFO: (18) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 5.819273ms)
May 14 11:37:28.649: INFO: (18) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 5.779159ms)
May 14 11:37:28.649: INFO: (18) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 5.78949ms)
May 14 11:37:28.649: INFO: (18) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname2/proxy/: tls qux (200; 6.282275ms)
May 14 11:37:28.652: INFO: (19) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:460/proxy/: tls baz (200; 3.017191ms)
May 14 11:37:28.652: INFO: (19) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 2.968825ms)
May 14 11:37:28.652: INFO: (19) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:160/proxy/: foo (200; 3.007282ms)
May 14 11:37:28.654: INFO: (19) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:462/proxy/: tls qux (200; 4.076597ms)
May 14 11:37:28.654: INFO: (19) /api/v1/namespaces/proxy-7637/pods/https:proxy-service-sjjlz-lgc4j:443/proxy/: test (200; 4.271308ms)
May 14 11:37:28.654: INFO: (19) /api/v1/namespaces/proxy-7637/pods/proxy-service-sjjlz-lgc4j:1080/proxy/: test<... (200; 4.307108ms)
May 14 11:37:28.654: INFO: (19) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:162/proxy/: bar (200; 4.401205ms)
May 14 11:37:28.654: INFO: (19) /api/v1/namespaces/proxy-7637/pods/http:proxy-service-sjjlz-lgc4j:1080/proxy/: ... (200; 4.315455ms)
May 14 11:37:28.654: INFO: (19) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname1/proxy/: tls baz (200; 4.636501ms)
May 14 11:37:28.654: INFO: (19) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname2/proxy/: bar (200; 4.701488ms)
May 14 11:37:28.655: INFO: (19) /api/v1/namespaces/proxy-7637/services/proxy-service-sjjlz:portname1/proxy/: foo (200; 5.439419ms)
May 14 11:37:28.655: INFO: (19) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname2/proxy/: bar (200; 5.458638ms)
May 14 11:37:28.655: INFO: (19) /api/v1/namespaces/proxy-7637/services/http:proxy-service-sjjlz:portname1/proxy/: foo (200; 5.544633ms)
May 14 11:37:28.655: INFO: (19) /api/v1/namespaces/proxy-7637/services/https:proxy-service-sjjlz:tlsportname2/proxy/: tls qux (200; 5.598994ms)
STEP: deleting ReplicationController proxy-service-sjjlz in namespace proxy-7637, will wait for the garbage collector to delete the pods
May 14 11:37:28.713: INFO: Deleting ReplicationController proxy-service-sjjlz took: 6.167096ms
May 14 11:37:29.213: INFO: Terminating ReplicationController proxy-service-sjjlz pods took: 500.29688ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:37:32.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7637" for this suite.

• [SLOW TEST:17.363 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":132,"skipped":2215,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:37:32.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-4468
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 14 11:37:32.268: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 14 11:37:32.563: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 14 11:37:34.619: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 14 11:37:36.567: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 14 11:37:38.567: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 14 11:37:40.567: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 14 11:37:42.567: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 14 11:37:44.567: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 14 11:37:46.567: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 14 11:37:48.567: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 14 11:37:48.574: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 14 11:37:50.579: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 14 11:37:54.604: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.161:8080/dial?request=hostname&protocol=http&host=10.244.2.160&port=8080&tries=1'] Namespace:pod-network-test-4468 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 11:37:54.604: INFO: >>> kubeConfig: /root/.kube/config
I0514 11:37:54.634806       7 log.go:172] (0xc001e58160) (0xc000feb4a0) Create stream
I0514 11:37:54.634840       7 log.go:172] (0xc001e58160) (0xc000feb4a0) Stream added, broadcasting: 1
I0514 11:37:54.636209       7 log.go:172] (0xc001e58160) Reply frame received for 1
I0514 11:37:54.636245       7 log.go:172] (0xc001e58160) (0xc001e88500) Create stream
I0514 11:37:54.636258       7 log.go:172] (0xc001e58160) (0xc001e88500) Stream added, broadcasting: 3
I0514 11:37:54.636872       7 log.go:172] (0xc001e58160) Reply frame received for 3
I0514 11:37:54.636901       7 log.go:172] (0xc001e58160) (0xc000feb540) Create stream
I0514 11:37:54.636911       7 log.go:172] (0xc001e58160) (0xc000feb540) Stream added, broadcasting: 5
I0514 11:37:54.637888       7 log.go:172] (0xc001e58160) Reply frame received for 5
I0514 11:37:54.720584       7 log.go:172] (0xc001e58160) Data frame received for 3
I0514 11:37:54.720608       7 log.go:172] (0xc001e88500) (3) Data frame handling
I0514 11:37:54.720620       7 log.go:172] (0xc001e88500) (3) Data frame sent
I0514 11:37:54.720931       7 log.go:172] (0xc001e58160) Data frame received for 5
I0514 11:37:54.720953       7 log.go:172] (0xc000feb540) (5) Data frame handling
I0514 11:37:54.720973       7 log.go:172] (0xc001e58160) Data frame received for 3
I0514 11:37:54.720989       7 log.go:172] (0xc001e88500) (3) Data frame handling
I0514 11:37:54.722448       7 log.go:172] (0xc001e58160) Data frame received for 1
I0514 11:37:54.722483       7 log.go:172] (0xc000feb4a0) (1) Data frame handling
I0514 11:37:54.722508       7 log.go:172] (0xc000feb4a0) (1) Data frame sent
I0514 11:37:54.722524       7 log.go:172] (0xc001e58160) (0xc000feb4a0) Stream removed, broadcasting: 1
I0514 11:37:54.722544       7 log.go:172] (0xc001e58160) Go away received
I0514 11:37:54.722676       7 log.go:172] (0xc001e58160) (0xc000feb4a0) Stream removed, broadcasting: 1
I0514 11:37:54.722693       7 log.go:172] (0xc001e58160) (0xc001e88500) Stream removed, broadcasting: 3
I0514 11:37:54.722703       7 log.go:172] (0xc001e58160) (0xc000feb540) Stream removed, broadcasting: 5
May 14 11:37:54.722: INFO: Waiting for responses: map[]
May 14 11:37:54.725: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.161:8080/dial?request=hostname&protocol=http&host=10.244.1.42&port=8080&tries=1'] Namespace:pod-network-test-4468 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 11:37:54.725: INFO: >>> kubeConfig: /root/.kube/config
I0514 11:37:54.750615       7 log.go:172] (0xc001e58420) (0xc000feb7c0) Create stream
I0514 11:37:54.750631       7 log.go:172] (0xc001e58420) (0xc000feb7c0) Stream added, broadcasting: 1
I0514 11:37:54.752370       7 log.go:172] (0xc001e58420) Reply frame received for 1
I0514 11:37:54.752419       7 log.go:172] (0xc001e58420) (0xc000feb900) Create stream
I0514 11:37:54.752436       7 log.go:172] (0xc001e58420) (0xc000feb900) Stream added, broadcasting: 3
I0514 11:37:54.753582       7 log.go:172] (0xc001e58420) Reply frame received for 3
I0514 11:37:54.753609       7 log.go:172] (0xc001e58420) (0xc000febcc0) Create stream
I0514 11:37:54.753623       7 log.go:172] (0xc001e58420) (0xc000febcc0) Stream added, broadcasting: 5
I0514 11:37:54.754442       7 log.go:172] (0xc001e58420) Reply frame received for 5
I0514 11:37:54.923725       7 log.go:172] (0xc001e58420) Data frame received for 3
I0514 11:37:54.923746       7 log.go:172] (0xc000feb900) (3) Data frame handling
I0514 11:37:54.923760       7 log.go:172] (0xc000feb900) (3) Data frame sent
I0514 11:37:54.924304       7 log.go:172] (0xc001e58420) Data frame received for 3
I0514 11:37:54.924317       7 log.go:172] (0xc000feb900) (3) Data frame handling
I0514 11:37:54.924582       7 log.go:172] (0xc001e58420) Data frame received for 5
I0514 11:37:54.924595       7 log.go:172] (0xc000febcc0) (5) Data frame handling
I0514 11:37:54.926184       7 log.go:172] (0xc001e58420) Data frame received for 1
I0514 11:37:54.926214       7 log.go:172] (0xc000feb7c0) (1) Data frame handling
I0514 11:37:54.926261       7 log.go:172] (0xc000feb7c0) (1) Data frame sent
I0514 11:37:54.926284       7 log.go:172] (0xc001e58420) (0xc000feb7c0) Stream removed, broadcasting: 1
I0514 11:37:54.926304       7 log.go:172] (0xc001e58420) Go away received
I0514 11:37:54.926480       7 log.go:172] (0xc001e58420) (0xc000feb7c0) Stream removed, broadcasting: 1
I0514 11:37:54.926548       7 log.go:172] (0xc001e58420) (0xc000feb900) Stream removed, broadcasting: 3
I0514 11:37:54.926564       7 log.go:172] (0xc001e58420) (0xc000febcc0) Stream removed, broadcasting: 5
May 14 11:37:54.926: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:37:54.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4468" for this suite.

• [SLOW TEST:22.754 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2259,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:37:54.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 11:37:56.196: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 11:37:58.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053076, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053076, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053076, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053076, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 11:38:00.560: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053076, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053076, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053076, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053076, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 11:38:03.590: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:38:03.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:38:04.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5387" for this suite.
STEP: Destroying namespace "webhook-5387-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.955 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":134,"skipped":2284,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:38:04.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 14 11:38:04.979: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7981 /api/v1/namespaces/watch-7981/configmaps/e2e-watch-test-resource-version 5a4fb5c5-c9aa-4616-b70b-6ecbde7394bd 4279190 0 2020-05-14 11:38:04 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-14 11:38:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 14 11:38:04.979: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7981 /api/v1/namespaces/watch-7981/configmaps/e2e-watch-test-resource-version 5a4fb5c5-c9aa-4616-b70b-6ecbde7394bd 4279191 0 2020-05-14 11:38:04 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-14 11:38:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:38:04.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7981" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":135,"skipped":2297,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:38:05.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 14 11:38:05.249: INFO: Waiting up to 5m0s for pod "downwardapi-volume-524fb714-90fd-4b7a-a099-7b180b0121a2" in namespace "projected-7652" to be "Succeeded or Failed"
May 14 11:38:05.601: INFO: Pod "downwardapi-volume-524fb714-90fd-4b7a-a099-7b180b0121a2": Phase="Pending", Reason="", readiness=false. Elapsed: 352.103097ms
May 14 11:38:07.605: INFO: Pod "downwardapi-volume-524fb714-90fd-4b7a-a099-7b180b0121a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.355987701s
May 14 11:38:09.609: INFO: Pod "downwardapi-volume-524fb714-90fd-4b7a-a099-7b180b0121a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.359147716s
May 14 11:38:11.613: INFO: Pod "downwardapi-volume-524fb714-90fd-4b7a-a099-7b180b0121a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.363240234s
STEP: Saw pod success
May 14 11:38:11.613: INFO: Pod "downwardapi-volume-524fb714-90fd-4b7a-a099-7b180b0121a2" satisfied condition "Succeeded or Failed"
May 14 11:38:11.616: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-524fb714-90fd-4b7a-a099-7b180b0121a2 container client-container: 
STEP: delete the pod
May 14 11:38:11.692: INFO: Waiting for pod downwardapi-volume-524fb714-90fd-4b7a-a099-7b180b0121a2 to disappear
May 14 11:38:11.699: INFO: Pod downwardapi-volume-524fb714-90fd-4b7a-a099-7b180b0121a2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:38:11.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7652" for this suite.

• [SLOW TEST:6.640 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2299,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:38:11.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-247c69b6-467b-44e0-8658-d61aed6ec5d2 in namespace container-probe-5824
May 14 11:38:16.045: INFO: Started pod liveness-247c69b6-467b-44e0-8658-d61aed6ec5d2 in namespace container-probe-5824
STEP: checking the pod's current state and verifying that restartCount is present
May 14 11:38:16.048: INFO: Initial restart count of pod liveness-247c69b6-467b-44e0-8658-d61aed6ec5d2 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:42:17.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5824" for this suite.

• [SLOW TEST:246.041 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2300,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:42:17.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
May 14 11:42:18.609: INFO: Waiting up to 5m0s for pod "client-containers-2bf81da6-ac2f-4696-8535-7ad57138ecb3" in namespace "containers-9852" to be "Succeeded or Failed"
May 14 11:42:18.678: INFO: Pod "client-containers-2bf81da6-ac2f-4696-8535-7ad57138ecb3": Phase="Pending", Reason="", readiness=false. Elapsed: 68.988806ms
May 14 11:42:20.859: INFO: Pod "client-containers-2bf81da6-ac2f-4696-8535-7ad57138ecb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249782735s
May 14 11:42:22.979: INFO: Pod "client-containers-2bf81da6-ac2f-4696-8535-7ad57138ecb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.369645557s
STEP: Saw pod success
May 14 11:42:22.979: INFO: Pod "client-containers-2bf81da6-ac2f-4696-8535-7ad57138ecb3" satisfied condition "Succeeded or Failed"
May 14 11:42:22.981: INFO: Trying to get logs from node kali-worker pod client-containers-2bf81da6-ac2f-4696-8535-7ad57138ecb3 container test-container: 
STEP: delete the pod
May 14 11:42:23.212: INFO: Waiting for pod client-containers-2bf81da6-ac2f-4696-8535-7ad57138ecb3 to disappear
May 14 11:42:23.219: INFO: Pod client-containers-2bf81da6-ac2f-4696-8535-7ad57138ecb3 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:42:23.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9852" for this suite.

• [SLOW TEST:5.528 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2353,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:42:23.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 11:42:23.888: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 11:42:25.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053343, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053343, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053343, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053343, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 11:42:27.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053343, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053343, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053343, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053343, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 11:42:30.953: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:42:30.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6955-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:42:32.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3361" for this suite.
STEP: Destroying namespace "webhook-3361-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.037 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":139,"skipped":2361,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:42:32.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-45cfb4b4-1192-42e5-9f88-4ce5b96a69a5
STEP: Creating a pod to test consume secrets
May 14 11:42:32.462: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-87ce3dea-5e6e-48eb-93fd-c8c7c432a375" in namespace "projected-7544" to be "Succeeded or Failed"
May 14 11:42:32.476: INFO: Pod "pod-projected-secrets-87ce3dea-5e6e-48eb-93fd-c8c7c432a375": Phase="Pending", Reason="", readiness=false. Elapsed: 13.431528ms
May 14 11:42:34.479: INFO: Pod "pod-projected-secrets-87ce3dea-5e6e-48eb-93fd-c8c7c432a375": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01664492s
May 14 11:42:36.483: INFO: Pod "pod-projected-secrets-87ce3dea-5e6e-48eb-93fd-c8c7c432a375": Phase="Running", Reason="", readiness=true. Elapsed: 4.020868602s
May 14 11:42:38.489: INFO: Pod "pod-projected-secrets-87ce3dea-5e6e-48eb-93fd-c8c7c432a375": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026667486s
STEP: Saw pod success
May 14 11:42:38.489: INFO: Pod "pod-projected-secrets-87ce3dea-5e6e-48eb-93fd-c8c7c432a375" satisfied condition "Succeeded or Failed"
May 14 11:42:38.492: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-87ce3dea-5e6e-48eb-93fd-c8c7c432a375 container projected-secret-volume-test: 
STEP: delete the pod
May 14 11:42:38.527: INFO: Waiting for pod pod-projected-secrets-87ce3dea-5e6e-48eb-93fd-c8c7c432a375 to disappear
May 14 11:42:38.539: INFO: Pod pod-projected-secrets-87ce3dea-5e6e-48eb-93fd-c8c7c432a375 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:42:38.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7544" for this suite.

• [SLOW TEST:6.230 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2441,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:42:38.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-e1a11b69-77d3-4e0e-a030-78aca1d392a3
STEP: Creating a pod to test consume secrets
May 14 11:42:38.695: INFO: Waiting up to 5m0s for pod "pod-secrets-df40c339-7c9d-4716-ac79-5c0b34d5351b" in namespace "secrets-6950" to be "Succeeded or Failed"
May 14 11:42:38.707: INFO: Pod "pod-secrets-df40c339-7c9d-4716-ac79-5c0b34d5351b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.93791ms
May 14 11:42:40.792: INFO: Pod "pod-secrets-df40c339-7c9d-4716-ac79-5c0b34d5351b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097002996s
May 14 11:42:42.797: INFO: Pod "pod-secrets-df40c339-7c9d-4716-ac79-5c0b34d5351b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101951916s
STEP: Saw pod success
May 14 11:42:42.797: INFO: Pod "pod-secrets-df40c339-7c9d-4716-ac79-5c0b34d5351b" satisfied condition "Succeeded or Failed"
May 14 11:42:42.800: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-df40c339-7c9d-4716-ac79-5c0b34d5351b container secret-volume-test: 
STEP: delete the pod
May 14 11:42:42.858: INFO: Waiting for pod pod-secrets-df40c339-7c9d-4716-ac79-5c0b34d5351b to disappear
May 14 11:42:42.890: INFO: Pod pod-secrets-df40c339-7c9d-4716-ac79-5c0b34d5351b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:42:42.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6950" for this suite.
STEP: Destroying namespace "secret-namespace-912" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2446,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:42:42.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7136.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7136.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 14 11:42:53.308: INFO: DNS probes using dns-7136/dns-test-a0b1f1cc-9638-46ee-bbbc-5ddc18f9af2d succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:42:53.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7136" for this suite.

• [SLOW TEST:10.410 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":142,"skipped":2458,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:42:53.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-54879dd7-8ace-484c-b3b8-fcbb1eecfbde
STEP: Creating a pod to test consume secrets
May 14 11:42:53.795: INFO: Waiting up to 5m0s for pod "pod-secrets-08d6bb09-fc72-4619-a69f-fa72b26afefb" in namespace "secrets-7441" to be "Succeeded or Failed"
May 14 11:42:53.804: INFO: Pod "pod-secrets-08d6bb09-fc72-4619-a69f-fa72b26afefb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.565764ms
May 14 11:42:55.809: INFO: Pod "pod-secrets-08d6bb09-fc72-4619-a69f-fa72b26afefb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01365837s
May 14 11:42:57.814: INFO: Pod "pod-secrets-08d6bb09-fc72-4619-a69f-fa72b26afefb": Phase="Running", Reason="", readiness=true. Elapsed: 4.018295546s
May 14 11:42:59.818: INFO: Pod "pod-secrets-08d6bb09-fc72-4619-a69f-fa72b26afefb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022526422s
STEP: Saw pod success
May 14 11:42:59.818: INFO: Pod "pod-secrets-08d6bb09-fc72-4619-a69f-fa72b26afefb" satisfied condition "Succeeded or Failed"
May 14 11:42:59.820: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-08d6bb09-fc72-4619-a69f-fa72b26afefb container secret-env-test: 
STEP: delete the pod
May 14 11:42:59.837: INFO: Waiting for pod pod-secrets-08d6bb09-fc72-4619-a69f-fa72b26afefb to disappear
May 14 11:42:59.842: INFO: Pod pod-secrets-08d6bb09-fc72-4619-a69f-fa72b26afefb no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:42:59.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7441" for this suite.

• [SLOW TEST:6.489 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2461,"failed":0}
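The test above exercises consuming a Secret through environment variables. A minimal sketch of the kind of pod it creates (image, secret name, and key are illustrative, not taken from this run; the container name matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:           # injects one key of the secret as an env var
          name: secret-test-example
          key: data-1
```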
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:42:59.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 11:43:00.649: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 11:43:02.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053380, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053380, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053380, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053380, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 11:43:04.760: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053380, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053380, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053380, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053380, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 11:43:07.798: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:43:07.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4790" for this suite.
STEP: Destroying namespace "webhook-4790-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.246 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":144,"skipped":2479,"failed":0}
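The patch/update steps above toggle the CREATE operation in the webhook's rules. A hedged sketch of the relevant stanza (the configuration and webhook names and the path are assumptions; the namespace and service name match this run):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook        # hypothetical name
webhooks:
- name: adding-configmap-data.k8s.io     # hypothetical webhook name
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  clientConfig:
    service:
      namespace: webhook-4790            # matches this run
      name: e2e-test-webhook             # matches this run
      path: /mutating-configmaps         # assumed path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["configmaps"]
    operations: ["CREATE"]   # the test removes this, checks no mutation, then patches it back
```

The "Patching ... to include the create operation" step corresponds to a JSON patch such as `kubectl patch mutatingwebhookconfigurations <name> --type=json -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]'`.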
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:43:08.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:43:12.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8187" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2503,"failed":0}
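The read-only busybox test above hinges on one container security setting. A minimal sketch (pod name and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs-example
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo test > /file"]   # expected to fail: / is read-only
    securityContext:
      readOnlyRootFilesystem: true
```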
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:43:12.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
May 14 11:43:12.246: INFO: Waiting up to 5m0s for pod "pod-9a7d2e17-7532-40aa-9479-49a89fc62e44" in namespace "emptydir-8585" to be "Succeeded or Failed"
May 14 11:43:12.250: INFO: Pod "pod-9a7d2e17-7532-40aa-9479-49a89fc62e44": Phase="Pending", Reason="", readiness=false. Elapsed: 3.783916ms
May 14 11:43:14.344: INFO: Pod "pod-9a7d2e17-7532-40aa-9479-49a89fc62e44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098204988s
May 14 11:43:16.348: INFO: Pod "pod-9a7d2e17-7532-40aa-9479-49a89fc62e44": Phase="Running", Reason="", readiness=true. Elapsed: 4.101828468s
May 14 11:43:18.352: INFO: Pod "pod-9a7d2e17-7532-40aa-9479-49a89fc62e44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105824189s
STEP: Saw pod success
May 14 11:43:18.352: INFO: Pod "pod-9a7d2e17-7532-40aa-9479-49a89fc62e44" satisfied condition "Succeeded or Failed"
May 14 11:43:18.354: INFO: Trying to get logs from node kali-worker pod pod-9a7d2e17-7532-40aa-9479-49a89fc62e44 container test-container: 
STEP: delete the pod
May 14 11:43:18.391: INFO: Waiting for pod pod-9a7d2e17-7532-40aa-9479-49a89fc62e44 to disappear
May 14 11:43:18.404: INFO: Pod pod-9a7d2e17-7532-40aa-9479-49a89fc62e44 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:43:18.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8585" for this suite.

• [SLOW TEST:6.238 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2516,"failed":0}
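The (root,0777,default) case above writes a 0777 file into an emptyDir on the default medium. A sketch of the shape of pod involved (names and command are illustrative; the container name matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # "default" medium = node-local storage; medium: Memory would use tmpfs
```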
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:43:18.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-a6ddad48-d4aa-42a9-a1a3-f7700326fe03
STEP: Creating a pod to test consume secrets
May 14 11:43:18.635: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d84bc924-7bfc-4327-a61c-9a3292de9e84" in namespace "projected-6604" to be "Succeeded or Failed"
May 14 11:43:18.678: INFO: Pod "pod-projected-secrets-d84bc924-7bfc-4327-a61c-9a3292de9e84": Phase="Pending", Reason="", readiness=false. Elapsed: 43.22991ms
May 14 11:43:20.697: INFO: Pod "pod-projected-secrets-d84bc924-7bfc-4327-a61c-9a3292de9e84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062652248s
May 14 11:43:22.702: INFO: Pod "pod-projected-secrets-d84bc924-7bfc-4327-a61c-9a3292de9e84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067059861s
STEP: Saw pod success
May 14 11:43:22.702: INFO: Pod "pod-projected-secrets-d84bc924-7bfc-4327-a61c-9a3292de9e84" satisfied condition "Succeeded or Failed"
May 14 11:43:22.705: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-d84bc924-7bfc-4327-a61c-9a3292de9e84 container secret-volume-test: 
STEP: delete the pod
May 14 11:43:22.765: INFO: Waiting for pod pod-projected-secrets-d84bc924-7bfc-4327-a61c-9a3292de9e84 to disappear
May 14 11:43:22.829: INFO: Pod pod-projected-secrets-d84bc924-7bfc-4327-a61c-9a3292de9e84 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:43:22.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6604" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2548,"failed":0}
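The projected-secret test above mounts the same secret through two projected volumes in one pod. A hedged sketch (secret name and mount paths are illustrative; the container name matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume-1/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test-example
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test-example
```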
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:43:22.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
May 14 11:43:23.014: INFO: Waiting up to 5m0s for pod "var-expansion-eb0a6fe2-ad42-4eab-a567-2275b4caee8e" in namespace "var-expansion-9214" to be "Succeeded or Failed"
May 14 11:43:23.017: INFO: Pod "var-expansion-eb0a6fe2-ad42-4eab-a567-2275b4caee8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.69338ms
May 14 11:43:25.147: INFO: Pod "var-expansion-eb0a6fe2-ad42-4eab-a567-2275b4caee8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132148718s
May 14 11:43:27.150: INFO: Pod "var-expansion-eb0a6fe2-ad42-4eab-a567-2275b4caee8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135860678s
STEP: Saw pod success
May 14 11:43:27.150: INFO: Pod "var-expansion-eb0a6fe2-ad42-4eab-a567-2275b4caee8e" satisfied condition "Succeeded or Failed"
May 14 11:43:27.153: INFO: Trying to get logs from node kali-worker2 pod var-expansion-eb0a6fe2-ad42-4eab-a567-2275b4caee8e container dapi-container: 
STEP: delete the pod
May 14 11:43:27.278: INFO: Waiting for pod var-expansion-eb0a6fe2-ad42-4eab-a567-2275b4caee8e to disappear
May 14 11:43:27.298: INFO: Pod var-expansion-eb0a6fe2-ad42-4eab-a567-2275b4caee8e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:43:27.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9214" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2565,"failed":0}
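The variable-expansion test above relies on the kubelet substituting `$(VAR)` references in `args` before the container starts. A minimal sketch (env var and value are illustrative; the container name matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: TEST_VAR
      value: "test-value"
    command: ["sh", "-c"]
    args: ["echo $(TEST_VAR)"]   # kubelet expands $(TEST_VAR); $TEST_VAR would be left to the shell
```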
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:43:27.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:43:27.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7664" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":149,"skipped":2585,"failed":0}
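The create/get/update/delete steps above operate on an object of this shape (name and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    pods: "5"        # quota is enforced at admission time per namespace
    services: "3"
```

The update and delete steps map onto `kubectl replace -f quota.yaml` and `kubectl delete resourcequota test-quota`.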
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:43:27.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-t2d8
STEP: Creating a pod to test atomic-volume-subpath
May 14 11:43:27.756: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-t2d8" in namespace "subpath-8659" to be "Succeeded or Failed"
May 14 11:43:27.781: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Pending", Reason="", readiness=false. Elapsed: 25.334473ms
May 14 11:43:29.878: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12154995s
May 14 11:43:31.883: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Running", Reason="", readiness=true. Elapsed: 4.126368495s
May 14 11:43:33.887: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Running", Reason="", readiness=true. Elapsed: 6.130836251s
May 14 11:43:35.892: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Running", Reason="", readiness=true. Elapsed: 8.135522982s
May 14 11:43:37.897: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Running", Reason="", readiness=true. Elapsed: 10.141019434s
May 14 11:43:39.901: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Running", Reason="", readiness=true. Elapsed: 12.145149994s
May 14 11:43:41.904: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Running", Reason="", readiness=true. Elapsed: 14.147690355s
May 14 11:43:43.908: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Running", Reason="", readiness=true. Elapsed: 16.151808386s
May 14 11:43:45.913: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Running", Reason="", readiness=true. Elapsed: 18.157221759s
May 14 11:43:47.917: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Running", Reason="", readiness=true. Elapsed: 20.161173024s
May 14 11:43:49.922: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Running", Reason="", readiness=true. Elapsed: 22.165538678s
May 14 11:43:51.926: INFO: Pod "pod-subpath-test-secret-t2d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.169734167s
STEP: Saw pod success
May 14 11:43:51.926: INFO: Pod "pod-subpath-test-secret-t2d8" satisfied condition "Succeeded or Failed"
May 14 11:43:51.930: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-t2d8 container test-container-subpath-secret-t2d8: 
STEP: delete the pod
May 14 11:43:51.983: INFO: Waiting for pod pod-subpath-test-secret-t2d8 to disappear
May 14 11:43:52.151: INFO: Pod pod-subpath-test-secret-t2d8 no longer exists
STEP: Deleting pod pod-subpath-test-secret-t2d8
May 14 11:43:52.151: INFO: Deleting pod "pod-subpath-test-secret-t2d8" in namespace "subpath-8659"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:43:52.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8659" for this suite.

• [SLOW TEST:24.594 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":150,"skipped":2626,"failed":0}
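The subpath test above mounts a single file out of a secret volume via `subPath`. A hedged sketch (secret name, key, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume/data-1"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/data-1
      subPath: data-1    # mounts only this key's file, not the whole volume directory
  volumes:
  - name: test-volume
    secret:
      secretName: subpath-secret-example   # hypothetical
```

Note that unlike a whole-volume mount, a `subPath` mount does not pick up the atomic-writer updates the volume itself receives.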
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:43:52.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:43:52.284: INFO: (0) /api/v1/nodes/kali-worker:10250/proxy/logs/: 
alternatives.log
containers/
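The query above hits the node's `logs` proxy subresource on the API server; the node name and kubelet port come from the log line. A small helper to rebuild that path:

```shell
# Builds the proxy-subresource path for a node's kubelet log listing,
# matching the URL shape in the log line above.
node_logs_path() {
  printf '/api/v1/nodes/%s:%s/proxy/logs/' "$1" "$2"
}

node_logs_path kali-worker 10250   # -> /api/v1/nodes/kali-worker:10250/proxy/logs/
# e.g.: kubectl get --raw "$(node_logs_path kali-worker 10250)"
```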

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 11:43:53.195: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 11:43:55.204: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053433, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053433, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053433, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053433, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 11:43:58.240: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
May 14 11:44:02.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config attach --namespace=webhook-7266 to-be-attached-pod -i -c=container1'
May 14 11:44:07.522: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:44:07.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7266" for this suite.
STEP: Destroying namespace "webhook-7266-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.343 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":152,"skipped":2629,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:44:07.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-e77aa39e-42e1-4412-83df-34f05f3a9377
STEP: Creating configMap with name cm-test-opt-upd-f9cf80ca-62b2-487f-bd58-6197ac453897
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e77aa39e-42e1-4412-83df-34f05f3a9377
STEP: Updating configmap cm-test-opt-upd-f9cf80ca-62b2-487f-bd58-6197ac453897
STEP: Creating configMap with name cm-test-opt-create-00e962ad-5ab9-482b-b1fa-5bbe8a43d51f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:45:35.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8666" for this suite.

• [SLOW TEST:87.853 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2633,"failed":0}
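The optional-updates test above depends on `optional: true` on the configMap volume source, which lets the pod run even while a referenced configMap is absent or deleted. A minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  containers:
  - name: cm-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-volume-del
      mountPath: /etc/cm-volume-del
  volumes:
  - name: cm-volume-del
    configMap:
      name: cm-test-opt-del-example
      optional: true   # volume stays valid even after the configMap is deleted
```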
S
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:45:35.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 14 11:45:36.451: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 14 11:45:38.557: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053536, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053536, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053536, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053536, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 11:45:40.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053536, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053536, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053536, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053536, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 11:45:43.952: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:45:43.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:45:45.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9660" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:9.672 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":154,"skipped":2634,"failed":0}
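The passing test above exercises the CRD conversion path: the API server wraps the stored v1 objects in a ConversionReview, the webhook deployed earlier answers with the same objects rewritten to the desired version. A minimal sketch of that request/response shape, assuming the conversion is trivial (only `apiVersion` changes, as in this e2e fixture); the helper name `convert_review` is illustrative, not from the suite:

```python
import copy

def convert_review(review: dict) -> dict:
    """Answer a ConversionReview by rewriting each object's apiVersion."""
    req = review["request"]
    desired = req["desiredAPIVersion"]
    converted = []
    for obj in req["objects"]:
        out = copy.deepcopy(obj)
        out["apiVersion"] = desired  # trivial conversion: only the version changes
        converted.append(out)
    return {
        "apiVersion": review["apiVersion"],
        "kind": "ConversionReview",
        "response": {
            "uid": req["uid"],                    # must echo the request uid
            "result": {"status": "Success"},
            "convertedObjects": converted,
        },
    }
```

A real webhook serves this over the TLS endpoint paired with the `e2e-test-crd-conversion-webhook` service seen in the log.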
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:45:45.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 14 11:45:45.292: INFO: Waiting up to 5m0s for pod "downwardapi-volume-267787b6-ee3d-4d09-b675-7747c0f98ef3" in namespace "projected-9701" to be "Succeeded or Failed"
May 14 11:45:45.296: INFO: Pod "downwardapi-volume-267787b6-ee3d-4d09-b675-7747c0f98ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.637666ms
May 14 11:45:47.299: INFO: Pod "downwardapi-volume-267787b6-ee3d-4d09-b675-7747c0f98ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007160667s
May 14 11:45:49.308: INFO: Pod "downwardapi-volume-267787b6-ee3d-4d09-b675-7747c0f98ef3": Phase="Running", Reason="", readiness=true. Elapsed: 4.016293678s
May 14 11:45:51.311: INFO: Pod "downwardapi-volume-267787b6-ee3d-4d09-b675-7747c0f98ef3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018964333s
STEP: Saw pod success
May 14 11:45:51.311: INFO: Pod "downwardapi-volume-267787b6-ee3d-4d09-b675-7747c0f98ef3" satisfied condition "Succeeded or Failed"
May 14 11:45:51.313: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-267787b6-ee3d-4d09-b675-7747c0f98ef3 container client-container: 
STEP: delete the pod
May 14 11:45:51.365: INFO: Waiting for pod downwardapi-volume-267787b6-ee3d-4d09-b675-7747c0f98ef3 to disappear
May 14 11:45:51.374: INFO: Pod downwardapi-volume-267787b6-ee3d-4d09-b675-7747c0f98ef3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:45:51.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9701" for this suite.

• [SLOW TEST:6.135 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2635,"failed":0}
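The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines throughout this log come from a simple poll loop: fetch the pod phase at an interval until a terminal phase or the timeout. A sketch of that pattern, assuming `get_phase` is any callable returning the current phase (a stand-in for the real API GET):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod is Succeeded or Failed, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase          # the "satisfied condition" line in the log
        time.sleep(interval)      # the framework logs elapsed time each pass
    raise TimeoutError("pod did not reach a terminal phase")
```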
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:45:51.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:45:58.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5378" for this suite.

• [SLOW TEST:7.092 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":156,"skipped":2657,"failed":0}
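"Ensuring resource quota status is calculated" checks that the quota controller promptly fills `status.hard` (a copy of `spec.hard`) and `status.used` (current consumption, zero for every limited resource in a fresh namespace). A sketch of that expected status shape; the function name is illustrative:

```python
def expected_quota_status(spec_hard, usage=None):
    """Build the status a freshly calculated ResourceQuota should carry."""
    usage = usage or {}
    return {
        "hard": dict(spec_hard),
        # every resource named in spec.hard gets a used entry, defaulting to zero
        "used": {name: usage.get(name, "0") for name in spec_hard},
    }
```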
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:45:58.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:45:58.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 14 11:46:00.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1214 create -f -'
May 14 11:46:05.279: INFO: stderr: ""
May 14 11:46:05.279: INFO: stdout: "e2e-test-crd-publish-openapi-8868-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 14 11:46:05.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1214 delete e2e-test-crd-publish-openapi-8868-crds test-cr'
May 14 11:46:05.464: INFO: stderr: ""
May 14 11:46:05.464: INFO: stdout: "e2e-test-crd-publish-openapi-8868-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 14 11:46:05.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1214 apply -f -'
May 14 11:46:05.739: INFO: stderr: ""
May 14 11:46:05.739: INFO: stdout: "e2e-test-crd-publish-openapi-8868-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 14 11:46:05.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1214 delete e2e-test-crd-publish-openapi-8868-crds test-cr'
May 14 11:46:05.859: INFO: stderr: ""
May 14 11:46:05.859: INFO: stdout: "e2e-test-crd-publish-openapi-8868-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 14 11:46:05.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8868-crds'
May 14 11:46:06.107: INFO: stderr: ""
May 14 11:46:06.107: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8868-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:46:09.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1214" for this suite.

• [SLOW TEST:10.593 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":157,"skipped":2664,"failed":0}
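"Preserving unknown fields at the schema root" means the CRD's structural schema sets `x-kubernetes-preserve-unknown-fields: true`, so the API server skips pruning and the `kubectl create`/`apply` calls above succeed with arbitrary properties. A sketch of that pruning decision, assuming a schema dict in a simplified OpenAPI-like shape (names illustrative):

```python
def prune(obj, schema):
    """Drop properties not named in the schema, unless pruning is disabled."""
    if schema.get("x-kubernetes-preserve-unknown-fields"):
        return obj  # unknown properties survive verbatim
    props = schema.get("properties", {})
    return {k: prune(v, props[k]) if isinstance(v, dict) else v
            for k, v in obj.items() if k in props}
```

With an open root schema, the "any unknown properties" custom resources in the log pass through unchanged; a closed schema would silently strip them.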
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:46:09.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
May 14 11:46:09.133: INFO: Waiting up to 5m0s for pod "pod-b4d85e41-febf-4b81-a5a3-36a599775aed" in namespace "emptydir-1686" to be "Succeeded or Failed"
May 14 11:46:09.141: INFO: Pod "pod-b4d85e41-febf-4b81-a5a3-36a599775aed": Phase="Pending", Reason="", readiness=false. Elapsed: 7.885978ms
May 14 11:46:11.145: INFO: Pod "pod-b4d85e41-febf-4b81-a5a3-36a599775aed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011837676s
May 14 11:46:13.150: INFO: Pod "pod-b4d85e41-febf-4b81-a5a3-36a599775aed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016112921s
STEP: Saw pod success
May 14 11:46:13.150: INFO: Pod "pod-b4d85e41-febf-4b81-a5a3-36a599775aed" satisfied condition "Succeeded or Failed"
May 14 11:46:13.152: INFO: Trying to get logs from node kali-worker2 pod pod-b4d85e41-febf-4b81-a5a3-36a599775aed container test-container: 
STEP: delete the pod
May 14 11:46:13.330: INFO: Waiting for pod pod-b4d85e41-febf-4b81-a5a3-36a599775aed to disappear
May 14 11:46:13.376: INFO: Pod pod-b4d85e41-febf-4b81-a5a3-36a599775aed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:46:13.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1686" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2676,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:46:13.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
May 14 11:46:13.485: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3418" to be "Succeeded or Failed"
May 14 11:46:13.514: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 28.936794ms
May 14 11:46:15.544: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058723956s
May 14 11:46:17.547: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061696386s
May 14 11:46:19.551: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065815607s
STEP: Saw pod success
May 14 11:46:19.551: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
May 14 11:46:19.554: INFO: Trying to get logs from node kali-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
May 14 11:46:19.636: INFO: Waiting for pod pod-host-path-test to disappear
May 14 11:46:19.651: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:46:19.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3418" for this suite.

• [SLOW TEST:6.272 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2687,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:46:19.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 14 11:46:27.879: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 14 11:46:28.030: INFO: Pod pod-with-prestop-exec-hook still exists
May 14 11:46:30.030: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 14 11:46:30.033: INFO: Pod pod-with-prestop-exec-hook still exists
May 14 11:46:32.030: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 14 11:46:32.035: INFO: Pod pod-with-prestop-exec-hook still exists
May 14 11:46:34.030: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 14 11:46:34.034: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:46:34.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-958" for this suite.

• [SLOW TEST:14.392 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2703,"failed":0}
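The ordering this test depends on: when the pod is deleted, the kubelet runs the preStop exec hook to completion (within the grace period) before the runtime stops the container, which is why the pod "still exists" for several poll cycles above. A toy ordering sketch (names are illustrative, not kubelet API):

```python
def kill_container(events, has_prestop=True):
    """Record the shutdown sequence for a container with an optional preStop hook."""
    if has_prestop:
        events.append("prestop-hook")  # exec'd inside the container first
    events.append("sigterm")           # only then does the runtime stop it
    return events
```

After the pod finally disappears, the test queries the handler pod to confirm the `prestop-hook` step actually ran.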
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:46:34.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-76307823-8a78-438f-82fb-f151c3518c26
STEP: Creating a pod to test consume configMaps
May 14 11:46:34.199: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-462e301c-4b40-41fb-9c0f-4ebd6480a4a5" in namespace "projected-9134" to be "Succeeded or Failed"
May 14 11:46:34.203: INFO: Pod "pod-projected-configmaps-462e301c-4b40-41fb-9c0f-4ebd6480a4a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.004625ms
May 14 11:46:36.208: INFO: Pod "pod-projected-configmaps-462e301c-4b40-41fb-9c0f-4ebd6480a4a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008693827s
May 14 11:46:38.212: INFO: Pod "pod-projected-configmaps-462e301c-4b40-41fb-9c0f-4ebd6480a4a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013253762s
STEP: Saw pod success
May 14 11:46:38.212: INFO: Pod "pod-projected-configmaps-462e301c-4b40-41fb-9c0f-4ebd6480a4a5" satisfied condition "Succeeded or Failed"
May 14 11:46:38.216: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-462e301c-4b40-41fb-9c0f-4ebd6480a4a5 container projected-configmap-volume-test: 
STEP: delete the pod
May 14 11:46:38.397: INFO: Waiting for pod pod-projected-configmaps-462e301c-4b40-41fb-9c0f-4ebd6480a4a5 to disappear
May 14 11:46:38.449: INFO: Pod pod-projected-configmaps-462e301c-4b40-41fb-9c0f-4ebd6480a4a5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:46:38.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9134" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2704,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:46:38.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
May 14 11:46:38.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:46:54.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8458" for this suite.

• [SLOW TEST:15.729 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":162,"skipped":2744,"failed":0}
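"Mark a version not served" flips `served: false` on one version of the multi-version CRD; the OpenAPI publisher then drops that version's definitions from the aggregated spec while leaving the other version untouched, which is exactly what the two "check" steps verify. A sketch of that filtering over a simplified version list:

```python
def published_versions(crd_versions):
    """Return the names of versions that should appear in /openapi/v2."""
    return [v["name"] for v in crd_versions if v.get("served")]
```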
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:46:54.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 14 11:46:58.580: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:46:58.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4820" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2769,"failed":0}
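`FallbackToLogsOnError` only substitutes the tail of the container logs when the termination message file is empty and the container failed; here the container succeeds after writing `OK` to the file, so the file contents win, matching the `Expected: &{OK}` line. A sketch of that selection rule (function name illustrative):

```python
def termination_message(file_contents, logs_tail, exit_code, policy):
    """Pick the termination message per the container's message policy."""
    if file_contents:
        return file_contents  # a non-empty message file always wins
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs_tail      # fall back to logs only on failure
    return ""
```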
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:46:58.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
May 14 11:47:04.915: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2684 PodName:pod-sharedvolume-c16ca5d0-379d-4032-b870-cc0f1d5a1488 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 11:47:04.915: INFO: >>> kubeConfig: /root/.kube/config
I0514 11:47:04.944314       7 log.go:172] (0xc0020d7810) (0xc001ae6aa0) Create stream
I0514 11:47:04.944340       7 log.go:172] (0xc0020d7810) (0xc001ae6aa0) Stream added, broadcasting: 1
I0514 11:47:04.946017       7 log.go:172] (0xc0020d7810) Reply frame received for 1
I0514 11:47:04.946051       7 log.go:172] (0xc0020d7810) (0xc0011ed9a0) Create stream
I0514 11:47:04.946060       7 log.go:172] (0xc0020d7810) (0xc0011ed9a0) Stream added, broadcasting: 3
I0514 11:47:04.946798       7 log.go:172] (0xc0020d7810) Reply frame received for 3
I0514 11:47:04.946814       7 log.go:172] (0xc0020d7810) (0xc0011edb80) Create stream
I0514 11:47:04.946822       7 log.go:172] (0xc0020d7810) (0xc0011edb80) Stream added, broadcasting: 5
I0514 11:47:04.947602       7 log.go:172] (0xc0020d7810) Reply frame received for 5
I0514 11:47:05.009258       7 log.go:172] (0xc0020d7810) Data frame received for 3
I0514 11:47:05.009291       7 log.go:172] (0xc0011ed9a0) (3) Data frame handling
I0514 11:47:05.009299       7 log.go:172] (0xc0011ed9a0) (3) Data frame sent
I0514 11:47:05.009304       7 log.go:172] (0xc0020d7810) Data frame received for 3
I0514 11:47:05.009308       7 log.go:172] (0xc0011ed9a0) (3) Data frame handling
I0514 11:47:05.009354       7 log.go:172] (0xc0020d7810) Data frame received for 5
I0514 11:47:05.009362       7 log.go:172] (0xc0011edb80) (5) Data frame handling
I0514 11:47:05.010893       7 log.go:172] (0xc0020d7810) Data frame received for 1
I0514 11:47:05.010932       7 log.go:172] (0xc001ae6aa0) (1) Data frame handling
I0514 11:47:05.010954       7 log.go:172] (0xc001ae6aa0) (1) Data frame sent
I0514 11:47:05.010969       7 log.go:172] (0xc0020d7810) (0xc001ae6aa0) Stream removed, broadcasting: 1
I0514 11:47:05.010995       7 log.go:172] (0xc0020d7810) Go away received
I0514 11:47:05.011146       7 log.go:172] (0xc0020d7810) (0xc001ae6aa0) Stream removed, broadcasting: 1
I0514 11:47:05.011172       7 log.go:172] (0xc0020d7810) (0xc0011ed9a0) Stream removed, broadcasting: 3
I0514 11:47:05.011189       7 log.go:172] (0xc0020d7810) (0xc0011edb80) Stream removed, broadcasting: 5
May 14 11:47:05.011: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:47:05.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2684" for this suite.

• [SLOW TEST:6.357 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":164,"skipped":2811,"failed":0}
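The shared-volume test has one container write `/usr/share/volumeshare/shareddata.txt` into an emptyDir while a second container `cat`s it through its own mount of the same volume; both mounts resolve to one backing directory on the node. A local sketch using a temp directory as the stand-in for the emptyDir (the payload string is illustrative):

```python
import pathlib
import tempfile

def shared_volume_roundtrip(data):
    """Write through one 'mount' of a shared dir and read through another."""
    with tempfile.TemporaryDirectory() as vol:            # the "emptyDir"
        writer_view = pathlib.Path(vol) / "shareddata.txt"
        writer_view.write_text(data)                      # container 1 writes
        reader_view = pathlib.Path(vol) / "shareddata.txt"
        return reader_view.read_text()                    # container 2 reads
```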
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:47:05.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 11:47:05.813: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 11:47:07.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053625, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053625, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053625, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053625, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 11:47:10.901: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:47:11.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6178" for this suite.
STEP: Destroying namespace "webhook-6178-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.179 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":165,"skipped":2835,"failed":0}
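[Editor's note] The test above registers a mutating admission webhook and checks that pods created afterwards come back with the webhook's defaults applied. A sketch of the kind of registration object involved — the webhook name, path, and caBundle here are hypothetical placeholders, though the service name and namespace appear in the log above:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-defaulter-demo          # hypothetical
webhooks:
  - name: pod-defaulter.example.com # hypothetical
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: e2e-test-webhook      # service name from the log
        namespace: webhook-6178     # namespace from the log
        path: /mutating-pods        # hypothetical path
      caBundle: "<base64-encoded CA certificate>"
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]         # intercept pod creation and patch defaults in
```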
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:47:11.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-4e0cd912-62c7-41e7-87f5-13a1edf0e3dd in namespace container-probe-7624
May 14 11:47:17.371: INFO: Started pod busybox-4e0cd912-62c7-41e7-87f5-13a1edf0e3dd in namespace container-probe-7624
STEP: checking the pod's current state and verifying that restartCount is present
May 14 11:47:17.374: INFO: Initial restart count of pod busybox-4e0cd912-62c7-41e7-87f5-13a1edf0e3dd is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:51:19.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7624" for this suite.

• [SLOW TEST:248.154 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2838,"failed":0}
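[Editor's note] The test above runs a pod whose exec liveness probe always succeeds (the probed file exists for the pod's lifetime) and confirms restartCount stays 0 over the ~4-minute observation window. A sketch of such a pod — names and timings are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo       # hypothetical name
spec:
  containers:
    - name: busybox
      image: busybox
      # Create the health file up front so the probe never fails,
      # hence the container is never restarted.
      command: ["sh", "-c", "touch /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
```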
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:51:19.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0514 11:51:29.835407       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 11:51:29.835: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:51:29.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2229" for this suite.

• [SLOW TEST:10.489 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":167,"skipped":2845,"failed":0}
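[Editor's note] "Not orphaning" in the test above means the ReplicationController is deleted with a propagation policy that lets the garbage collector remove its dependent pods (which carry ownerReferences back to the RC). A sketch of the DeleteOptions body that controls this behavior — illustrative, not the test's literal request:

```yaml
# Body sent with the DELETE request for the owner object.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Background   # GC deletes dependents after the owner (behavior tested above)
# propagationPolicy: Orphan     # alternative: keep dependents, strip their ownerReferences
# propagationPolicy: Foreground # alternative: delete dependents first, then the owner
```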
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:51:29.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 14 11:51:30.086: INFO: Waiting up to 5m0s for pod "downward-api-a3deaf87-e37f-4a47-a9d0-c9c1321684f2" in namespace "downward-api-635" to be "Succeeded or Failed"
May 14 11:51:30.096: INFO: Pod "downward-api-a3deaf87-e37f-4a47-a9d0-c9c1321684f2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.323425ms
May 14 11:51:32.099: INFO: Pod "downward-api-a3deaf87-e37f-4a47-a9d0-c9c1321684f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012858307s
May 14 11:51:34.103: INFO: Pod "downward-api-a3deaf87-e37f-4a47-a9d0-c9c1321684f2": Phase="Running", Reason="", readiness=true. Elapsed: 4.017253639s
May 14 11:51:36.108: INFO: Pod "downward-api-a3deaf87-e37f-4a47-a9d0-c9c1321684f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021517166s
STEP: Saw pod success
May 14 11:51:36.108: INFO: Pod "downward-api-a3deaf87-e37f-4a47-a9d0-c9c1321684f2" satisfied condition "Succeeded or Failed"
May 14 11:51:36.111: INFO: Trying to get logs from node kali-worker pod downward-api-a3deaf87-e37f-4a47-a9d0-c9c1321684f2 container dapi-container: 
STEP: delete the pod
May 14 11:51:36.168: INFO: Waiting for pod downward-api-a3deaf87-e37f-4a47-a9d0-c9c1321684f2 to disappear
May 14 11:51:36.180: INFO: Pod downward-api-a3deaf87-e37f-4a47-a9d0-c9c1321684f2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:51:36.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-635" for this suite.

• [SLOW TEST:6.345 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":2889,"failed":0}
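[Editor's note] The test above injects the node's IP into a container environment variable via the downward API and checks the pod's output. A minimal sketch — the pod name and env var name are illustrative, but `status.hostIP` is the fieldPath this test exercises:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env"]   # prints HOST_IP so the test can verify it
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP   # IP of the node the pod is scheduled on
```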
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:51:36.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:51:36.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7109" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":169,"skipped":2923,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:51:36.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 14 11:51:40.951: INFO: Successfully updated pod "pod-update-68d9696c-e891-476c-b5dd-4b21bb116c82"
STEP: verifying the updated pod is in kubernetes
May 14 11:51:40.979: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:51:40.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5628" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2930,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:51:40.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 11:51:41.820: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 11:51:43.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053901, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053901, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053901, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053901, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 11:51:45.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053901, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053901, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053901, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053901, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 11:51:48.883: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:51:49.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2304" for this suite.
STEP: Destroying namespace "webhook-2304-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.917 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":171,"skipped":2939,"failed":0}
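[Editor's note] The fail-closed behavior tested above comes from `failurePolicy: Fail`: when the API server cannot reach the webhook backend, matching requests are rejected rather than allowed through. A sketch of such a registration — all names and the unreachable service are hypothetical:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-demo            # hypothetical
webhooks:
  - name: fail-closed.example.com   # hypothetical
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail             # reject the request if the webhook cannot be reached
    clientConfig:
      service:
        name: no-such-service       # deliberately unreachable, as in the test
        namespace: default
        path: /validate
      caBundle: "<base64-encoded CA certificate>"
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]   # configmap creation is unconditionally rejected
```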
SSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:51:49.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 14 11:51:50.444: INFO: Pod name pod-release: Found 0 pods out of 1
May 14 11:51:55.447: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:51:55.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2004" for this suite.

• [SLOW TEST:5.692 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":172,"skipped":2943,"failed":0}
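[Editor's note] "Releasing" a pod in the test above means changing the pod's labels so it no longer matches the ReplicationController's selector; the RC then drops its ownerReference on that pod and creates a replacement. A sketch of the controller side — image and command are illustrative:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release            # name from the log; spec details are illustrative
spec:
  replicas: 1
  selector:
    name: pod-release          # pods are owned only while they carry this label
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]
```

Relabeling the running pod (for example with `kubectl label pod <pod> name=released --overwrite`, a hypothetical invocation) takes it out of the selector, which is exactly the release the test asserts.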
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:51:55.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 14 11:51:55.703: INFO: Waiting up to 5m0s for pod "pod-58fff783-459b-4a2b-843d-9f90b83ef1f6" in namespace "emptydir-1929" to be "Succeeded or Failed"
May 14 11:51:55.740: INFO: Pod "pod-58fff783-459b-4a2b-843d-9f90b83ef1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 36.73996ms
May 14 11:51:57.743: INFO: Pod "pod-58fff783-459b-4a2b-843d-9f90b83ef1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0402094s
May 14 11:51:59.909: INFO: Pod "pod-58fff783-459b-4a2b-843d-9f90b83ef1f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.205565761s
STEP: Saw pod success
May 14 11:51:59.909: INFO: Pod "pod-58fff783-459b-4a2b-843d-9f90b83ef1f6" satisfied condition "Succeeded or Failed"
May 14 11:51:59.912: INFO: Trying to get logs from node kali-worker2 pod pod-58fff783-459b-4a2b-843d-9f90b83ef1f6 container test-container: 
STEP: delete the pod
May 14 11:52:00.098: INFO: Waiting for pod pod-58fff783-459b-4a2b-843d-9f90b83ef1f6 to disappear
May 14 11:52:00.129: INFO: Pod pod-58fff783-459b-4a2b-843d-9f90b83ef1f6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:52:00.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1929" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2950,"failed":0}
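[Editor's note] The "(root,0644,tmpfs)" variant above mounts an emptyDir backed by memory (`medium: Memory`, i.e. tmpfs) and verifies a file created as root with mode 0644 inside it; the mode is set by the test binary, not by a manifest field. A sketch of the volume setup — names and the command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: busybox
      # Show the mount is tmpfs and list file modes inside it.
      command: ["sh", "-c", "mount | grep /test-volume; ls -l /test-volume"]
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
  volumes:
    - name: test-volume
      emptyDir:
        medium: Memory             # back the emptyDir with tmpfs instead of node disk
```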
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:52:00.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:52:00.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9455'
May 14 11:52:00.686: INFO: stderr: ""
May 14 11:52:00.686: INFO: stdout: "replicationcontroller/agnhost-master created\n"
May 14 11:52:00.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9455'
May 14 11:52:01.054: INFO: stderr: ""
May 14 11:52:01.054: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 14 11:52:02.113: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 11:52:02.113: INFO: Found 0 / 1
May 14 11:52:03.101: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 11:52:03.101: INFO: Found 0 / 1
May 14 11:52:04.334: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 11:52:04.334: INFO: Found 0 / 1
May 14 11:52:05.256: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 11:52:05.256: INFO: Found 0 / 1
May 14 11:52:06.058: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 11:52:06.058: INFO: Found 1 / 1
May 14 11:52:06.058: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
May 14 11:52:06.061: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 11:52:06.061: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 14 11:52:06.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe pod agnhost-master-lnfqw --namespace=kubectl-9455'
May 14 11:52:06.179: INFO: stderr: ""
May 14 11:52:06.179: INFO: stdout: "Name:         agnhost-master-lnfqw\nNamespace:    kubectl-9455\nPriority:     0\nNode:         kali-worker2/172.17.0.18\nStart Time:   Thu, 14 May 2020 11:52:00 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.1.64\nIPs:\n  IP:           10.244.1.64\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://d5776e686ca7a15a5a40258406898a86f52fcedf4971337a8e449179055b82c9\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 14 May 2020 11:52:04 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t7kn5 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-t7kn5:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-t7kn5\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  6s    default-scheduler      Successfully assigned kubectl-9455/agnhost-master-lnfqw to kali-worker2\n  Normal  Pulled     3s    kubelet, kali-worker2  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    2s    kubelet, kali-worker2  Created container agnhost-master\n  Normal  Started    1s    kubelet, kali-worker2  Started container agnhost-master\n"
May 14 11:52:06.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9455'
May 14 11:52:06.283: INFO: stderr: ""
May 14 11:52:06.283: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-9455\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  6s    replication-controller  Created pod: agnhost-master-lnfqw\n"
May 14 11:52:06.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9455'
May 14 11:52:06.385: INFO: stderr: ""
May 14 11:52:06.385: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-9455\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.102.58.216\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.64:6379\nSession Affinity:  None\nEvents:            <none>\n"
May 14 11:52:06.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe node kali-control-plane'
May 14 11:52:06.509: INFO: stderr: ""
May 14 11:52:06.509: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 29 Apr 2020 09:30:59 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Thu, 14 May 2020 11:52:04 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Thu, 14 May 2020 11:47:45 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Thu, 14 May 2020 11:47:45 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Thu, 14 May 2020 11:47:45 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Thu, 14 May 2020 11:47:45 +0000   Wed, 29 Apr 2020 09:31:34 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.19\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 2146cf85bed648199604ab2e0e9ac609\n  System UUID:                e83c0db4-babe-44fc-9dad-b5eeae6d23fd\n  Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n  Kernel Version:             4.15.0-88-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.18.2\n  Kube-Proxy Version:         v1.18.2\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-rvq2k                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     15d\n  kube-system                 coredns-66bff467f8-w6zxd                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     15d\n  kube-system                 etcd-kali-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15d\n  kube-system                 kindnet-65djz                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      15d\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         15d\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         15d\n  kube-system                 kube-proxy-pnhtq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15d\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         15d\n  local-path-storage          local-path-provisioner-bd4bb6b75-6l9ph        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              <none>\n"
May 14 11:52:06.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe namespace kubectl-9455'
May 14 11:52:06.620: INFO: stderr: ""
May 14 11:52:06.620: INFO: stdout: "Name:         kubectl-9455\nLabels:       e2e-framework=kubectl\n              e2e-run=1b0ac4e6-8aa7-4483-a338-95453b346736\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:52:06.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9455" for this suite.

• [SLOW TEST:6.400 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":174,"skipped":2964,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:52:06.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
May 14 11:52:06.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6263'
May 14 11:52:06.968: INFO: stderr: ""
May 14 11:52:06.968: INFO: stdout: "pod/pause created\n"
May 14 11:52:06.968: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 14 11:52:06.968: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6263" to be "running and ready"
May 14 11:52:06.973: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.612443ms
May 14 11:52:08.978: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010077205s
May 14 11:52:10.983: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.014613067s
May 14 11:52:10.983: INFO: Pod "pause" satisfied condition "running and ready"
May 14 11:52:10.983: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
May 14 11:52:10.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6263'
May 14 11:52:11.096: INFO: stderr: ""
May 14 11:52:11.096: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 14 11:52:11.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6263'
May 14 11:52:11.209: INFO: stderr: ""
May 14 11:52:11.209: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    testing-label-value\n"
STEP: removing the label testing-label of a pod
May 14 11:52:11.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6263'
May 14 11:52:11.321: INFO: stderr: ""
May 14 11:52:11.322: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 14 11:52:11.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6263'
May 14 11:52:11.455: INFO: stderr: ""
May 14 11:52:11.455: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
May 14 11:52:11.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6263'
May 14 11:52:11.564: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 14 11:52:11.564: INFO: stdout: "pod \"pause\" force deleted\n"
May 14 11:52:11.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6263'
May 14 11:52:11.700: INFO: stderr: "No resources found in kubectl-6263 namespace.\n"
May 14 11:52:11.700: INFO: stdout: ""
May 14 11:52:11.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6263 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 14 11:52:12.008: INFO: stderr: ""
May 14 11:52:12.008: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:52:12.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6263" for this suite.

• [SLOW TEST:5.633 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
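The label test above exercises kubectl's label syntax: `key=value` sets a label, while a trailing dash (`testing-label-`) removes it. As a minimal sketch of that CLI convention (the helper name and dict-based model are illustrative, not the actual kubectl code):

```python
def apply_label_args(labels: dict, args: list) -> dict:
    """Mimic `kubectl label` argument semantics on a plain dict.

    "key=value" sets the label; "key-" (trailing dash) removes it,
    as seen in the test's `testing-label=testing-label-value` and
    `testing-label-` invocations.
    """
    for arg in args:
        if arg.endswith("-"):
            labels.pop(arg[:-1], None)  # trailing dash: remove the label
        else:
            key, _, value = arg.partition("=")
            labels[key] = value
    return labels
```

Running the two steps from the log against an empty label map first adds `testing-label=testing-label-value`, then `testing-label-` strips it again, matching the empty `TESTING-LABEL` column in the final `kubectl get pod -L testing-label` output.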
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":175,"skipped":2966,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:52:12.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 11:52:13.586: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 11:52:15.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053933, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053933, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053933, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053933, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 11:52:17.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053933, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053933, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053933, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725053933, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 11:52:20.687: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:52:21.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3586" for this suite.
STEP: Destroying namespace "webhook-3586-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.152 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":176,"skipped":2993,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:52:21.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:52:21.466: INFO: Creating deployment "webserver-deployment"
May 14 11:52:21.482: INFO: Waiting for observed generation 1
May 14 11:52:23.932: INFO: Waiting for all required pods to come up
May 14 11:52:23.936: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 14 11:52:36.304: INFO: Waiting for deployment "webserver-deployment" to complete
May 14 11:52:36.310: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 14 11:52:36.318: INFO: Updating deployment webserver-deployment
May 14 11:52:36.318: INFO: Waiting for observed generation 2
May 14 11:52:38.360: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 14 11:52:38.362: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 14 11:52:38.365: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 14 11:52:38.372: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 14 11:52:38.372: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 14 11:52:38.374: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 14 11:52:38.378: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 14 11:52:38.378: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 14 11:52:38.384: INFO: Updating deployment webserver-deployment
May 14 11:52:38.384: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 14 11:52:39.346: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 14 11:52:39.941: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
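The 20/13 split verified above comes from proportional scaling: with the deployment at 30 replicas and maxSurge=3, total allowed capacity is 33, and the 20 extra replicas (33 minus the current 13) are distributed across the two ReplicaSets in proportion to their existing sizes (8 and 5). A rough sketch of that arithmetic, assuming largest-first floor shares with the remainder going to the last ReplicaSet (an approximation of, not the actual, controller code, and handling only scale-up):

```python
def proportionally_scale(replica_sets: dict, new_total: int, max_surge: int) -> dict:
    """Distribute new capacity across ReplicaSets in proportion to size.

    replica_sets maps RS name -> current .spec.replicas. Capacity is
    new_total + max_surge (e.g. 30 + 3 = 33 in the log); the difference
    from the current total (33 - 13 = 20) is split proportionally.
    """
    current_total = sum(replica_sets.values())
    to_add = (new_total + max_surge) - current_total
    scaled, leftover = {}, to_add
    names = sorted(replica_sets, key=replica_sets.get, reverse=True)
    for i, name in enumerate(names):
        if i == len(names) - 1:
            share = leftover  # last RS absorbs the rounding remainder
        else:
            share = to_add * replica_sets[name] // current_total
            leftover -= share
        scaled[name] = replica_sets[name] + share
    return scaled
```

With the log's values (old RS at 8, new RS at 5, scaling 10 to 30 with maxSurge 3) this yields 20 and 13, matching the `.spec.replicas` verifications above.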
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 14 11:52:40.403: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-4394 /apis/apps/v1/namespaces/deployment-4394/deployments/webserver-deployment f600c03e-746d-48b1-9236-2f371e6ec31d 4283382 3 2020-05-14 11:52:21 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-14 11:52:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 
125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-14 11:52:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 
123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b40f58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-14 11:52:37 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-14 11:52:38 +0000 UTC,LastTransitionTime:2020-05-14 11:52:38 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

May 14 11:52:40.454: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-4394 /apis/apps/v1/namespaces/deployment-4394/replicasets/webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 4283392 3 2020-05-14 11:52:36 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment f600c03e-746d-48b1-9236-2f371e6ec31d 0xc003b41407 0xc003b41408}] []  [{kube-controller-manager Update apps/v1 2020-05-14 11:52:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 54 48 48 99 48 51 101 45 55 52 54 100 45 52 56 98 49 45 57 50 51 54 45 50 102 51 55 49 101 54 101 99 51 49 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 
102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b41488  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 14 11:52:40.454: INFO: All old ReplicaSets of Deployment "webserver-deployment":
May 14 11:52:40.454: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-4394 /apis/apps/v1/namespaces/deployment-4394/replicasets/webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 4283375 3 2020-05-14 11:52:21 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment f600c03e-746d-48b1-9236-2f371e6ec31d 0xc003b414e7 0xc003b414e8}] []  [{kube-controller-manager Update apps/v1 2020-05-14 11:52:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f600c03e-746d-48b1-9236-2f371e6ec31d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b41558  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
May 14 11:52:40.651: INFO: Pod "webserver-deployment-6676bcd6d4-25gj6" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-25gj6 webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-25gj6 78ff8df1-f8ff-48f5-838e-71e880bdd83c 4283362 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc000de3747 0xc000de3748}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.651: INFO: Pod "webserver-deployment-6676bcd6d4-5cmcw" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5cmcw webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-5cmcw f94757a4-075e-44dc-be78-3822818a110c 4283366 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc00512e077 0xc00512e078}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.651: INFO: Pod "webserver-deployment-6676bcd6d4-6qrhp" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6qrhp webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-6qrhp f69aa166-ea67-44e0-ba09-38d82221a93a 4283291 0 2020-05-14 11:52:36 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc00512e1c7 0xc00512e1c8}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-14 11:52:36 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-14 11:52:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.652: INFO: Pod "webserver-deployment-6676bcd6d4-9wkm7" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9wkm7 webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-9wkm7 981d5437-a787-4027-a4e5-9cccc500c9ed 4283393 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc00512e377 0xc00512e378}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-14 11:52:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-14 11:52:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.652: INFO: Pod "webserver-deployment-6676bcd6d4-dfp6p" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dfp6p webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-dfp6p 9b58284d-c42a-4030-9c8e-d998981a8b24 4283284 0 2020-05-14 11:52:36 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc00512e527 0xc00512e528}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:36 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-14 11:52:36 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-14 11:52:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
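As the Status blocks above show, these pods are still Pending with Ready=False (Reason=ContainersNotReady), which is why the test logs them as "not available": with this Deployment's minReadySeconds of 0, a pod counts as available exactly when its Ready condition is True. A minimal sketch of that check (a local helper written for illustration, not the actual deployment-controller code):

```python
def is_available(conditions):
    """Return True when a pod with these conditions would count as
    available (assuming minReadySeconds == 0, as in this test).

    `conditions` is a list of (type, status) pairs, mirroring the
    PodCondition entries seen in the Status dumps above.
    """
    return any(t == "Ready" and s == "True" for t, s in conditions)

# The webserver-deployment-6676bcd6d4-dfp6p pod above: scheduled and
# initialized, but its httpd container is still creating, so Ready is
# False and the pod is not available.
pending = [("Initialized", "True"), ("Ready", "False"),
           ("ContainersReady", "False"), ("PodScheduled", "True")]
print(is_available(pending))  # False
```

Once the container starts and readiness is established, the kubelet flips the Ready condition to True and the ReplicaSet's available-replica count catches up.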
May 14 11:52:40.652: INFO: Pod "webserver-deployment-6676bcd6d4-dqcqt" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dqcqt webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-dqcqt 409bd2f3-a7c6-4b7d-8218-d8bb68748936 4283372 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc00512e6d7 0xc00512e6d8}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.653: INFO: Pod "webserver-deployment-6676bcd6d4-k5sdm" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-k5sdm webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-k5sdm 1b6f5660-44e4-4c4c-8e30-cf7fc8bf88dd 4283311 0 2020-05-14 11:52:36 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc00512e817 0xc00512e818}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:36 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-14 11:52:37 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-14 11:52:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.653: INFO: Pod "webserver-deployment-6676bcd6d4-pfq5f" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pfq5f webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-pfq5f 7dd155bd-1466-42e7-af0f-a0918d788819 4283309 0 2020-05-14 11:52:36 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc00512e9c7 0xc00512e9c8}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:36 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-14 11:52:36 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-14 11:52:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.654: INFO: Pod "webserver-deployment-6676bcd6d4-qjh7p" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qjh7p webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-qjh7p 59065a12-1619-4cc9-9016-0cc8bd4bfcaf 4283374 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc00512eb77 0xc00512eb78}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.654: INFO: Pod "webserver-deployment-6676bcd6d4-qpktr" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qpktr webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-qpktr f367f390-9b00-4cda-992e-e8f054bce757 4283297 0 2020-05-14 11:52:36 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc00512ecb7 0xc00512ecb8}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:36 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-14 11:52:36 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-14 11:52:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
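The `FieldsV1` `Raw` values in these pod dumps are Go byte slices printed as decimal ASCII codes; decoded, they are the managed-fields JSON the API server tracks for each field owner. A minimal sketch for reading such a dump offline (the helper name is hypothetical, not part of the e2e suite):

```python
import json

def decode_fieldsv1(raw: str) -> dict:
    """Turn a space-separated decimal byte dump (as printed in this log)
    back into the JSON object it encodes."""
    text = bytes(int(b) for b in raw.split()).decode("utf-8")
    return json.loads(text)

# Every kube-controller-manager entry above begins with these bytes,
# which spell out '{"f:metadata":{' ...; a tiny closed sample:
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
print(decode_fieldsv1(sample))  # {'f:metadata': {}}
```

Applied to a full `Raw:*[…]` slice, this recovers objects like `{"f:metadata":{"f:generateName":{},…},"f:spec":{…}}` without hand-translating byte values.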
May 14 11:52:40.654: INFO: Pod "webserver-deployment-6676bcd6d4-sg2tx" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-sg2tx webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-sg2tx ed5b3249-3996-407a-a233-056164967350 4283389 0 2020-05-14 11:52:38 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc00512ee67 0xc00512ee68}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:38 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-14 11:52:40 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-14 11:52:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.654: INFO: Pod "webserver-deployment-6676bcd6d4-xwdwd" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-xwdwd webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-xwdwd e0b2e0ee-ee39-4609-a570-fa19afa7c805 4283359 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc00512f567 0xc00512f568}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.655: INFO: Pod "webserver-deployment-6676bcd6d4-z7h8p" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-z7h8p webserver-deployment-6676bcd6d4- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-6676bcd6d4-z7h8p de98b494-be00-41ab-a913-8e94f99e8bce 4283368 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 6699263e-8c47-4719-93dc-940460c68c7d 0xc00512f6a7 0xc00512f6a8}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6699263e-8c47-4719-93dc-940460c68c7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.655: INFO: Pod "webserver-deployment-84855cf797-44trg" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-44trg webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-44trg 6e9c3bfb-fcd7-47fa-bd8a-982e85ec309d 4283337 0 2020-05-14 11:52:38 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc00512f7e7 0xc00512f7e8}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:38 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6b8d9b21-3d7d-4888-82d2-006a5cdb7e87\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.655: INFO: Pod "webserver-deployment-84855cf797-68z4j" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-68z4j webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-68z4j d4ff7fa9-cd94-4e63-bc4d-b417af5126d6 4283377 0 2020-05-14 11:52:38 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc00512f917 0xc00512f918}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:38 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6b8d9b21-3d7d-4888-82d2-006a5cdb7e87\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-14 11:52:40 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-14 11:52:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.655: INFO: Pod "webserver-deployment-84855cf797-7msgv" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-7msgv webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-7msgv d5377f6d-a22f-4cd3-a458-f7767f398b0c 4283197 0 2020-05-14 11:52:21 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc00512faa7 0xc00512faa8}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:21 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6b8d9b21-3d7d-4888-82d2-006a5cdb7e87\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-14 11:52:32 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.68,StartTime:2020-05-14 11:52:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 11:52:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://46c195b13e8b00d8a3bbf7279dd93e197214c5884178b7b2f8e4105344b78738,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.656: INFO: Pod "webserver-deployment-84855cf797-bq96m" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-bq96m webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-bq96m eacaa92e-5e96-4a8e-98e8-e73ffe3b8fef 4283251 0 2020-05-14 11:52:21 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc00512fc57 0xc00512fc58}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:21 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6b8d9b21-3d7d-4888-82d2-006a5cdb7e87\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-14 11:52:35 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.185\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.185,StartTime:2020-05-14 11:52:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 11:52:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a58fffeda99a3425011b2943ae1c0c2844eb2e799790a5f48a34df9a9af0e024,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.185,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.656: INFO: Pod "webserver-deployment-84855cf797-bvbf4" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-bvbf4 webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-bvbf4 52cf6644-ffaf-4688-9607-3551e146c52b 4283357 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc00512fe07 0xc00512fe08}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 98 56 100 57 98 50 49 45 51 100 55 100 45 52 56 56 56 45 56 50 100 50 45 48 48 54 97 53 99 100 98 55 101 56 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.656: INFO: Pod "webserver-deployment-84855cf797-cmxmt" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-cmxmt webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-cmxmt 80f3be1c-11dc-41ed-91e5-0ddcd70ffa17 4283371 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc00512ff37 0xc00512ff38}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 98 56 100 57 98 50 49 45 51 100 55 100 45 52 56 56 56 45 56 50 100 50 45 48 48 54 97 53 99 100 98 55 101 56 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.657: INFO: Pod "webserver-deployment-84855cf797-d8fxd" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-d8fxd webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-d8fxd 2dd04d2d-0013-43fa-9d5b-9a5d1c0ec640 4283244 0 2020-05-14 11:52:21 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4a077 0xc001c4a078}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 98 56 100 57 98 50 49 45 51 100 55 100 45 52 56 56 56 45 56 50 100 50 45 48 48 54 97 53 99 100 98 55 101 56 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-14 11:52:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 56 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFile
system:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:35 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.184,StartTime:2020-05-14 11:52:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 11:52:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://769da20145fbe11893562ab89bda85074ce06130c1f354517a5ba95f130e6475,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.657: INFO: Pod "webserver-deployment-84855cf797-hnb9s" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-hnb9s webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-hnb9s ae8fc7b7-fc49-4443-a0a3-56b6f4be20ca 4283383 0 2020-05-14 11:52:38 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4a2d7 0xc001c4a2d8}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 98 56 100 57 98 50 49 45 51 100 55 100 45 52 56 56 56 45 56 50 100 50 45 48 48 54 97 53 99 100 98 55 101 56 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-14 11:52:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-14 11:52:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
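The framework's "is available" / "is not available" labels on these dumps track the pod's `Ready` condition: the `d8fxd` pod above is `Running` with `Ready=True` and is reported available, while `hnb9s` is still `Pending` with `Ready=False` (`ContainersNotReady`) and is not. A rough sketch of that check over a parsed pod status, assuming `minReadySeconds` is zero as in this deployment (the helper name and dict shape are illustrative; condition field names follow the Kubernetes API):

```python
def is_pod_available(pod_status: dict) -> bool:
    """Return True when the pod's Ready condition has status "True"."""
    for cond in pod_status.get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    # No Ready condition recorded yet (e.g. pod only just scheduled).
    return False

# Shapes mirroring the two dumps above: a pending pod with containers
# not yet ready, and a running pod whose httpd container is ready.
pending = {"phase": "Pending",
           "conditions": [{"type": "PodScheduled", "status": "True"},
                          {"type": "Ready", "status": "False"}]}
running = {"phase": "Running",
           "conditions": [{"type": "PodScheduled", "status": "True"},
                          {"type": "Ready", "status": "True"}]}
print(is_pod_available(pending), is_pod_available(running))  # -> False True
```

With a nonzero `minReadySeconds`, the real deployment controller additionally requires the Ready condition's `LastTransitionTime` to be sufficiently far in the past before counting the pod as available.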
May 14 11:52:40.657: INFO: Pod "webserver-deployment-84855cf797-lhprz" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-lhprz webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-lhprz 9df48674-79ba-4774-9c5d-9df532d71143 4283370 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4a7f7 0xc001c4a7f8}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6b8d9b21-3d7d-4888-82d2-006a5cdb7e87\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.658: INFO: Pod "webserver-deployment-84855cf797-mvgsg" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-mvgsg webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-mvgsg 97817c23-6134-426b-afe1-291b4b0b24a7 4283369 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4a937 0xc001c4a938}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6b8d9b21-3d7d-4888-82d2-006a5cdb7e87\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.658: INFO: Pod "webserver-deployment-84855cf797-mvw54" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-mvw54 webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-mvw54 27097a7d-9d8d-4d0f-a84f-4f6d7927d035 4283203 0 2020-05-14 11:52:21 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4ab57 0xc001c4ab58}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:21 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6b8d9b21-3d7d-4888-82d2-006a5cdb7e87\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-14 11:52:33 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:32 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.66,StartTime:2020-05-14 11:52:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 11:52:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b5c1984fa7a1684f7c20eabc7c4be670c1f7435a334b93a620b0555ee71d87fc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.658: INFO: Pod "webserver-deployment-84855cf797-nflbc" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-nflbc webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-nflbc e701f72b-24d6-4f68-a5e2-e2e511aab9c0 4283356 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4ad07 0xc001c4ad08}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6b8d9b21-3d7d-4888-82d2-006a5cdb7e87\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.658: INFO: Pod "webserver-deployment-84855cf797-np8kp" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-np8kp webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-np8kp 401b6891-34e7-4d9d-b9ff-acb2eb4ac9e2 4283358 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4ae77 0xc001c4ae78}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6b8d9b21-3d7d-4888-82d2-006a5cdb7e87\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.658: INFO: Pod "webserver-deployment-84855cf797-nxkpd" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-nxkpd webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-nxkpd bf6bcb8b-570e-4dd0-bd1d-c06407258714 4283367 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4b067 0xc001c4b068}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6b8d9b21-3d7d-4888-82d2-006a5cdb7e87\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
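The `FieldsV1{Raw:*[...]}` values in the pod dumps above are managed-fields JSON printed by Go's `%v` verb as space-separated decimal byte values. They are ordinary UTF-8 JSON underneath, so they can be recovered with a short helper (an illustrative snippet for reading these logs, not part of the e2e suite):

```python
import json

def decode_fieldsv1(raw: str) -> dict:
    """Decode a FieldsV1 Raw dump: 'raw' is the space-separated decimal
    byte list between the [ ] brackets in the log output."""
    data = bytes(int(b) for b in raw.split())
    return json.loads(data.decode("utf-8"))

# First few bytes of the dumps above spell out '{"f:metadata":...'
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
print(decode_fieldsv1(sample))  # {'f:metadata': {}}
```

Decoded this way, the `kube-controller-manager` entries above all yield the same managed-fields map (beginning `{"f:metadata":{"f:generateName":{},"f:labels":{...}},...}`), recording which pod fields that manager owns, while the `kubelet` entries cover `f:status` fields such as conditions and pod IPs.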
May 14 11:52:40.659: INFO: Pod "webserver-deployment-84855cf797-q6zrh" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-q6zrh webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-q6zrh 882b7c0b-0a3f-4245-ad97-1aa133dc1c76 4283231 0 2020-05-14 11:52:21 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4b247 0xc001c4b248}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 98 56 100 57 98 50 49 45 51 100 55 100 45 52 56 56 56 45 56 50 100 50 45 48 48 54 97 53 99 100 98 55 101 56 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-14 11:52:34 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.69,StartTime:2020-05-14 11:52:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 11:52:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9cc816d15668ebe88a9671a393a1c1e22c08629f85b695b375c221d715d2fbbf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.659: INFO: Pod "webserver-deployment-84855cf797-rdvfd" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-rdvfd webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-rdvfd 386a0460-2bb4-4514-83ff-cd53d6e5b423 4283365 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4b417 0xc001c4b418}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 98 56 100 57 98 50 49 45 51 100 55 100 45 52 56 56 56 45 56 50 100 50 45 48 48 54 97 53 99 100 98 55 101 56 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.659: INFO: Pod "webserver-deployment-84855cf797-th58m" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-th58m webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-th58m 5b6c597d-b659-4af5-b920-c526be14e53e 4283224 0 2020-05-14 11:52:21 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4b557 0xc001c4b558}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 98 56 100 57 98 50 49 45 51 100 55 100 45 52 56 56 56 45 56 50 100 50 45 48 48 54 97 53 99 100 98 55 101 56 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-14 11:52:34 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFile
system:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.180,StartTime:2020-05-14 11:52:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 11:52:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e4ff36f8328a25791257f36d6b2e73e3376fe55d9a95f3409274e170e589d577,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.180,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
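The "available" / "not available" verdicts in these log lines track the pod's `Ready` condition: the available pods above carry `PodCondition{Type:Ready,Status:True,...}`, while the pending ones have only `PodScheduled`. A minimal sketch of that check (assuming availability reduces to the `Ready` condition, as in Kubernetes' pod utilities; the `minReadySeconds` handling the deployment controller also applies is omitted):

```python
def is_pod_ready(pod: dict) -> bool:
    """Return True if the pod's status has a Ready condition with status 'True'."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False

# Shapes mirroring the dumps above: a running pod with Ready=True,
# and a pending pod that has only been scheduled.
running = {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
pending = {"status": {"conditions": [{"type": "PodScheduled", "status": "True"}]}}
print(is_pod_ready(running), is_pod_ready(pending))  # True False
```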
May 14 11:52:40.659: INFO: Pod "webserver-deployment-84855cf797-tsk74" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-tsk74 webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-tsk74 eeb7c93b-9118-4fbd-947e-e7d8de74c526 4283210 0 2020-05-14 11:52:21 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4b727 0xc001c4b728}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 98 56 100 57 98 50 49 45 51 100 55 100 45 52 56 56 56 45 56 50 100 50 45 48 48 54 97 53 99 100 98 55 101 56 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-14 11:52:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 54 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:33 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.67,StartTime:2020-05-14 11:52:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 11:52:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cb33725216ca5b31a12bb98501261f3bfe9520656debbccb8c3e73295c01a258,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.659: INFO: Pod "webserver-deployment-84855cf797-wjsb5" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-wjsb5 webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-wjsb5 26d79f24-d3bd-468a-a5da-53bd5abc41bf 4283216 0 2020-05-14 11:52:21 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4b927 0xc001c4b928}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 98 56 100 57 98 50 49 45 51 100 55 100 45 52 56 56 56 45 56 50 100 50 45 48 48 54 97 53 99 100 98 55 101 56 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-14 11:52:34 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 56 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFile
system:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.181,StartTime:2020-05-14 11:52:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 11:52:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e46dfd9ab1aed51ef69109e6f6eb04bf1865b7f51ebc603eae5cff4bb7c29fe2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.181,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:52:40.660: INFO: Pod "webserver-deployment-84855cf797-zqsh5" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-zqsh5 webserver-deployment-84855cf797- deployment-4394 /api/v1/namespaces/deployment-4394/pods/webserver-deployment-84855cf797-zqsh5 61760ac8-a61b-40a2-aa9c-630bba68e9ee 4283351 0 2020-05-14 11:52:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 6b8d9b21-3d7d-4888-82d2-006a5cdb7e87 0xc001c4bad7 0xc001c4bad8}] []  [{kube-controller-manager Update v1 2020-05-14 11:52:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 98 56 100 57 98 50 49 45 51 100 55 100 45 52 56 56 56 45 56 50 100 50 45 48 48 54 97 53 99 100 98 55 101 56 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fp86q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fp86q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fp86q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:ni
l,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 11:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
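The `FieldsV1{Raw:*[123 34 102 ...]}` arrays in the pod dumps above are managed-fields JSON printed as decimal byte values by the Go struct formatter. A minimal decoder (stdlib only; the sample bytes are the opening of the arrays above) recovers the readable JSON:

```python
import json

def decode_fieldsv1(raw_bytes):
    """Convert a FieldsV1 Raw listing (decimal byte values) back into a JSON object."""
    return json.loads(bytes(raw_bytes).decode("utf-8"))

# The first bytes of every Raw array above spell '{"f:metadata":...'; this
# truncated sample closes the object early so it parses on its own.
sample = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123, 125, 125]
print(decode_fieldsv1(sample))  # {'f:metadata': {}}
```

Running the full arrays through the same function yields the server-side-apply field ownership maps (`f:labels`, `f:ownerReferences`, `f:spec`, ...) recorded for `kube-controller-manager` and `kubelet`.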
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:52:40.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4394" for this suite.

• [SLOW TEST:19.731 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":177,"skipped":3047,"failed":0}
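Each spec emits a JSON progress record like the one above. A small sketch for tracking suite progress from these lines (note that `skipped` counts specs filtered out of the full 4992-spec suite, so it can exceed `total`, which is the 275 specs selected to run):

```python
import json

def suite_progress(line):
    """Summarize a Ginkgo per-spec progress record from the e2e log."""
    rec = json.loads(line)
    return {
        "completed": rec["completed"],
        # 'total' is the count of specs selected to run, so remaining is
        # simply total minus completed.
        "remaining": rec["total"] - rec["completed"],
        "failed": rec["failed"],
    }

line = '{"msg":"PASSED ...","total":275,"completed":177,"skipped":3047,"failed":0}'
print(suite_progress(line))  # {'completed': 177, 'remaining': 98, 'failed': 0}
```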
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:52:41.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:52:43.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 14 11:52:46.337: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5829 create -f -'
May 14 11:53:13.730: INFO: stderr: ""
May 14 11:53:13.731: INFO: stdout: "e2e-test-crd-publish-openapi-5576-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 14 11:53:13.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5829 delete e2e-test-crd-publish-openapi-5576-crds test-cr'
May 14 11:53:14.651: INFO: stderr: ""
May 14 11:53:14.651: INFO: stdout: "e2e-test-crd-publish-openapi-5576-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May 14 11:53:14.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5829 apply -f -'
May 14 11:53:16.416: INFO: stderr: ""
May 14 11:53:16.416: INFO: stdout: "e2e-test-crd-publish-openapi-5576-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 14 11:53:16.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5829 delete e2e-test-crd-publish-openapi-5576-crds test-cr'
May 14 11:53:17.553: INFO: stderr: ""
May 14 11:53:17.553: INFO: stdout: "e2e-test-crd-publish-openapi-5576-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May 14 11:53:17.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5576-crds'
May 14 11:53:18.733: INFO: stderr: ""
May 14 11:53:18.733: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5576-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
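The framework logs kubectl's stdout as a quoted string with escaped newlines. Printing the decoded value recovers the actual `kubectl explain` layout, which here has an empty DESCRIPTION because the CRD publishes no validation schema:

```python
# The string below is the logged stdout with its \n escapes interpreted.
logged_stdout = (
    "KIND:     E2e-test-crd-publish-openapi-5576-crd\n"
    "VERSION:  crd-publish-openapi-test-empty.example.com/v1\n"
    "\n"
    "DESCRIPTION:\n"
    "     \n"
)
print(logged_stdout)
```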
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:53:22.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5829" for this suite.

• [SLOW TEST:41.365 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":178,"skipped":3064,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:53:22.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
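The adoption check above hinges on label-selector matching: a controller adopts an orphan pod when every key/value pair in its selector is present in the pod's labels. A minimal sketch of that rule (function name is illustrative, not the controller's actual helper):

```python
def selector_matches(selector, pod_labels):
    """True when every selector key/value appears in the pod's labels,
    which is the condition for a ReplicationController to adopt an orphan pod."""
    return all(pod_labels.get(key) == value for key, value in selector.items())

# The test's pod carries a 'name' label that the RC's selector targets.
print(selector_matches({"name": "pod-adoption"}, {"name": "pod-adoption"}))  # True
```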
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:53:28.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4835" for this suite.

• [SLOW TEST:6.065 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":179,"skipped":3074,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:53:28.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-b9781a8a-b7f5-48e9-aad2-f83978123e55
STEP: Creating a pod to test consume configMaps
May 14 11:53:29.505: INFO: Waiting up to 5m0s for pod "pod-configmaps-767051a0-1fff-417e-b90b-6c825f5d7f31" in namespace "configmap-8404" to be "Succeeded or Failed"
May 14 11:53:29.536: INFO: Pod "pod-configmaps-767051a0-1fff-417e-b90b-6c825f5d7f31": Phase="Pending", Reason="", readiness=false. Elapsed: 31.372723ms
May 14 11:53:31.540: INFO: Pod "pod-configmaps-767051a0-1fff-417e-b90b-6c825f5d7f31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034782986s
May 14 11:53:33.544: INFO: Pod "pod-configmaps-767051a0-1fff-417e-b90b-6c825f5d7f31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038938654s
May 14 11:53:35.811: INFO: Pod "pod-configmaps-767051a0-1fff-417e-b90b-6c825f5d7f31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.306292148s
STEP: Saw pod success
May 14 11:53:35.811: INFO: Pod "pod-configmaps-767051a0-1fff-417e-b90b-6c825f5d7f31" satisfied condition "Succeeded or Failed"
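The `Elapsed:` values in the poll loop above are Go duration strings. A sketch for converting the simple single-unit forms seen in this log into seconds (compound Go durations like `1m30s` are out of scope here):

```python
import re

def parse_go_duration(s):
    """Parse a single-unit Go duration string, e.g. '31.372723ms' or
    '6.306292148s', into seconds as a float."""
    m = re.fullmatch(r"([\d.]+)(ms|s|m|h)", s)
    if not m:
        raise ValueError(f"unrecognized duration: {s}")
    value, unit = float(m.group(1)), m.group(2)
    scale = {"ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}[unit]
    return value * scale

# Elapsed times from the wait loop above, in seconds.
print(parse_go_duration("31.372723ms"))   # 0.031372723
print(parse_go_duration("6.306292148s"))  # 6.306292148
```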
May 14 11:53:35.813: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-767051a0-1fff-417e-b90b-6c825f5d7f31 container configmap-volume-test: 
STEP: delete the pod
May 14 11:53:36.190: INFO: Waiting for pod pod-configmaps-767051a0-1fff-417e-b90b-6c825f5d7f31 to disappear
May 14 11:53:36.430: INFO: Pod pod-configmaps-767051a0-1fff-417e-b90b-6c825f5d7f31 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:53:36.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8404" for this suite.

• [SLOW TEST:7.913 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3075,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:53:36.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 14 11:53:36.660: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8db8878-c940-4f71-a635-da21aec42fe5" in namespace "downward-api-4176" to be "Succeeded or Failed"
May 14 11:53:36.712: INFO: Pod "downwardapi-volume-a8db8878-c940-4f71-a635-da21aec42fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 51.629236ms
May 14 11:53:38.715: INFO: Pod "downwardapi-volume-a8db8878-c940-4f71-a635-da21aec42fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055199699s
May 14 11:53:40.720: INFO: Pod "downwardapi-volume-a8db8878-c940-4f71-a635-da21aec42fe5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059585227s
STEP: Saw pod success
May 14 11:53:40.720: INFO: Pod "downwardapi-volume-a8db8878-c940-4f71-a635-da21aec42fe5" satisfied condition "Succeeded or Failed"
May 14 11:53:40.723: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a8db8878-c940-4f71-a635-da21aec42fe5 container client-container: 
STEP: delete the pod
May 14 11:53:40.757: INFO: Waiting for pod downwardapi-volume-a8db8878-c940-4f71-a635-da21aec42fe5 to disappear
May 14 11:53:40.795: INFO: Pod downwardapi-volume-a8db8878-c940-4f71-a635-da21aec42fe5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:53:40.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4176" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3085,"failed":0}
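The repeated `Waiting up to 5m0s for pod ... Phase="Pending" ... Phase="Succeeded"` lines above come from the framework polling the pod's phase until it reaches a terminal state. A minimal sketch of that poll loop (hypothetical names and a simulated phase sequence — not the actual framework code):

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300, poll_s=2.0):
    """Poll get_phase() until it returns a terminal phase
    ("Succeeded" or "Failed") or the timeout elapses, mirroring
    the 'Waiting up to 5m0s for pod ...' log lines above."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        time.sleep(poll_s)

# Simulated phase sequence matching the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), poll_s=0.01)
```

In the real suite the phase comes from a GET on the Pod object; the loop shape and the "Succeeded or Failed" condition are what the log records.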
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:53:40.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:54:20.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-685" for this suite.

• [SLOW TEST:39.944 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3107,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:54:20.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:54:32.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9812" for this suite.

• [SLOW TEST:11.724 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":183,"skipped":3111,"failed":0}
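The ResourceQuota test above verifies that creating a Service is captured in the quota's used count and that deleting the Service releases the usage. A toy in-memory model of that accounting (illustrative only — the real bookkeeping is done by the apiserver's quota controller):

```python
class ResourceQuotaModel:
    """Minimal model of a quota tracking 'services' usage,
    mirroring the capture/release steps logged above."""
    def __init__(self, hard_services):
        self.hard = {"services": hard_services}
        self.used = {"services": 0}

    def create_service(self):
        # Admission is rejected once usage would exceed the hard limit.
        if self.used["services"] >= self.hard["services"]:
            raise RuntimeError("exceeded quota: services")
        self.used["services"] += 1  # quota status captures creation

    def delete_service(self):
        self.used["services"] -= 1  # quota status releases usage

quota = ResourceQuotaModel(hard_services=1)
quota.create_service()
after_create = quota.used["services"]   # usage captured
quota.delete_service()
after_delete = quota.used["services"]   # usage released
```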
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:54:32.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
May 14 11:54:32.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config cluster-info'
May 14 11:54:33.118: INFO: stderr: ""
May 14 11:54:33.118: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:54:33.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6309" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":184,"skipped":3163,"failed":0}
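The cluster-info stdout captured above is wrapped in ANSI color escapes (`\x1b[0;32m` etc.), which kubectl emits for terminal coloring. A small helper for stripping them when post-processing such logs (regex-based; assumes only SGR color sequences of the form `ESC [ ... m` appear, which is what this output contains):

```python
import re

# Matches SGR escape sequences such as \x1b[0;32m used for colors.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove ANSI color codes, leaving the plain text."""
    return ANSI_RE.sub("", text)

# First line of the cluster-info stdout from the log above.
stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n")
plain = strip_ansi(stdout)
```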
SS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:54:33.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:54:33.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:54:37.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2606" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3165,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:54:37.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-mmq8
STEP: Creating a pod to test atomic-volume-subpath
May 14 11:54:37.631: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mmq8" in namespace "subpath-4650" to be "Succeeded or Failed"
May 14 11:54:37.675: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Pending", Reason="", readiness=false. Elapsed: 43.970023ms
May 14 11:54:39.680: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048157805s
May 14 11:54:41.684: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Running", Reason="", readiness=true. Elapsed: 4.05302921s
May 14 11:54:43.689: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Running", Reason="", readiness=true. Elapsed: 6.057798447s
May 14 11:54:45.693: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Running", Reason="", readiness=true. Elapsed: 8.061537389s
May 14 11:54:47.697: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Running", Reason="", readiness=true. Elapsed: 10.065736645s
May 14 11:54:49.700: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Running", Reason="", readiness=true. Elapsed: 12.068937635s
May 14 11:54:51.705: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Running", Reason="", readiness=true. Elapsed: 14.073412882s
May 14 11:54:53.708: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Running", Reason="", readiness=true. Elapsed: 16.076984409s
May 14 11:54:55.712: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Running", Reason="", readiness=true. Elapsed: 18.0807452s
May 14 11:54:57.716: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Running", Reason="", readiness=true. Elapsed: 20.084356054s
May 14 11:54:59.887: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Running", Reason="", readiness=true. Elapsed: 22.255102684s
May 14 11:55:01.891: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Running", Reason="", readiness=true. Elapsed: 24.259119011s
May 14 11:55:03.895: INFO: Pod "pod-subpath-test-configmap-mmq8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.263314929s
STEP: Saw pod success
May 14 11:55:03.895: INFO: Pod "pod-subpath-test-configmap-mmq8" satisfied condition "Succeeded or Failed"
May 14 11:55:03.898: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-mmq8 container test-container-subpath-configmap-mmq8: 
STEP: delete the pod
May 14 11:55:03.943: INFO: Waiting for pod pod-subpath-test-configmap-mmq8 to disappear
May 14 11:55:03.958: INFO: Pod pod-subpath-test-configmap-mmq8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mmq8
May 14 11:55:03.958: INFO: Deleting pod "pod-subpath-test-configmap-mmq8" in namespace "subpath-4650"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:55:03.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4650" for this suite.

• [SLOW TEST:26.404 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":186,"skipped":3167,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:55:03.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:55:04.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3483" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3168,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:55:04.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
May 14 11:55:04.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config api-versions'
May 14 11:55:04.498: INFO: stderr: ""
May 14 11:55:04.498: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:55:04.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4473" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":188,"skipped":3180,"failed":0}
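The api-versions test asserts that the core `v1` group/version appears in `kubectl api-versions` output. The same check can be scripted over the captured stdout — note it must be an exact line match, since entries like `apps/v1` also end in `v1` (the stdout below is abridged from the log above):

```python
# Abridged kubectl api-versions stdout from the log above.
stdout = ("admissionregistration.k8s.io/v1\napps/v1\n"
          "authentication.k8s.io/v1\nbatch/v1\n"
          "networking.k8s.io/v1\nv1\n")

versions = stdout.splitlines()
# Exact match against whole lines, so "apps/v1" does not satisfy it.
has_core_v1 = "v1" in versions
```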
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:55:04.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 14 11:55:04.565: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a172b2cd-aab2-45b7-84d1-6146f8e68d58" in namespace "projected-4987" to be "Succeeded or Failed"
May 14 11:55:04.579: INFO: Pod "downwardapi-volume-a172b2cd-aab2-45b7-84d1-6146f8e68d58": Phase="Pending", Reason="", readiness=false. Elapsed: 13.947913ms
May 14 11:55:06.606: INFO: Pod "downwardapi-volume-a172b2cd-aab2-45b7-84d1-6146f8e68d58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04072399s
May 14 11:55:08.791: INFO: Pod "downwardapi-volume-a172b2cd-aab2-45b7-84d1-6146f8e68d58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.226307026s
May 14 11:55:10.795: INFO: Pod "downwardapi-volume-a172b2cd-aab2-45b7-84d1-6146f8e68d58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.230091085s
STEP: Saw pod success
May 14 11:55:10.795: INFO: Pod "downwardapi-volume-a172b2cd-aab2-45b7-84d1-6146f8e68d58" satisfied condition "Succeeded or Failed"
May 14 11:55:10.798: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a172b2cd-aab2-45b7-84d1-6146f8e68d58 container client-container: 
STEP: delete the pod
May 14 11:55:10.966: INFO: Waiting for pod downwardapi-volume-a172b2cd-aab2-45b7-84d1-6146f8e68d58 to disappear
May 14 11:55:10.992: INFO: Pod downwardapi-volume-a172b2cd-aab2-45b7-84d1-6146f8e68d58 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:55:10.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4987" for this suite.

• [SLOW TEST:6.544 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3219,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:55:11.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-5596
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5596 to expose endpoints map[]
May 14 11:55:11.524: INFO: Get endpoints failed (2.629398ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 14 11:55:12.527: INFO: successfully validated that service multi-endpoint-test in namespace services-5596 exposes endpoints map[] (1.005720761s elapsed)
STEP: Creating pod pod1 in namespace services-5596
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5596 to expose endpoints map[pod1:[100]]
May 14 11:55:16.749: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.21595977s elapsed, will retry)
May 14 11:55:17.757: INFO: successfully validated that service multi-endpoint-test in namespace services-5596 exposes endpoints map[pod1:[100]] (5.224517356s elapsed)
STEP: Creating pod pod2 in namespace services-5596
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5596 to expose endpoints map[pod1:[100] pod2:[101]]
May 14 11:55:21.996: INFO: successfully validated that service multi-endpoint-test in namespace services-5596 exposes endpoints map[pod1:[100] pod2:[101]] (4.234178028s elapsed)
STEP: Deleting pod pod1 in namespace services-5596
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5596 to expose endpoints map[pod2:[101]]
May 14 11:55:23.043: INFO: successfully validated that service multi-endpoint-test in namespace services-5596 exposes endpoints map[pod2:[101]] (1.043772989s elapsed)
STEP: Deleting pod pod2 in namespace services-5596
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5596 to expose endpoints map[]
May 14 11:55:24.076: INFO: successfully validated that service multi-endpoint-test in namespace services-5596 exposes endpoints map[] (1.029413081s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:55:24.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5596" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:13.287 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":190,"skipped":3265,"failed":0}
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:55:24.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-7b8cf18b-f640-4440-9568-bc5f52c1cc00
STEP: Creating a pod to test consume configMaps
May 14 11:55:24.501: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-faf061b7-dbca-42d0-9bc5-a1811995ac0b" in namespace "projected-2207" to be "Succeeded or Failed"
May 14 11:55:24.507: INFO: Pod "pod-projected-configmaps-faf061b7-dbca-42d0-9bc5-a1811995ac0b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.61106ms
May 14 11:55:26.599: INFO: Pod "pod-projected-configmaps-faf061b7-dbca-42d0-9bc5-a1811995ac0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097840709s
May 14 11:55:28.603: INFO: Pod "pod-projected-configmaps-faf061b7-dbca-42d0-9bc5-a1811995ac0b": Phase="Running", Reason="", readiness=true. Elapsed: 4.101503084s
May 14 11:55:30.607: INFO: Pod "pod-projected-configmaps-faf061b7-dbca-42d0-9bc5-a1811995ac0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10603706s
STEP: Saw pod success
May 14 11:55:30.607: INFO: Pod "pod-projected-configmaps-faf061b7-dbca-42d0-9bc5-a1811995ac0b" satisfied condition "Succeeded or Failed"
May 14 11:55:30.610: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-faf061b7-dbca-42d0-9bc5-a1811995ac0b container projected-configmap-volume-test: 
STEP: delete the pod
May 14 11:55:31.602: INFO: Waiting for pod pod-projected-configmaps-faf061b7-dbca-42d0-9bc5-a1811995ac0b to disappear
May 14 11:55:31.696: INFO: Pod pod-projected-configmaps-faf061b7-dbca-42d0-9bc5-a1811995ac0b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:55:31.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2207" for this suite.

• [SLOW TEST:7.683 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3265,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:55:32.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 14 11:55:32.158: INFO: Waiting up to 5m0s for pod "pod-11bde99b-f084-4893-9cc4-6fab78c837e1" in namespace "emptydir-4260" to be "Succeeded or Failed"
May 14 11:55:32.181: INFO: Pod "pod-11bde99b-f084-4893-9cc4-6fab78c837e1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.397869ms
May 14 11:55:34.757: INFO: Pod "pod-11bde99b-f084-4893-9cc4-6fab78c837e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.598047818s
May 14 11:55:36.761: INFO: Pod "pod-11bde99b-f084-4893-9cc4-6fab78c837e1": Phase="Running", Reason="", readiness=true. Elapsed: 4.602964185s
May 14 11:55:38.765: INFO: Pod "pod-11bde99b-f084-4893-9cc4-6fab78c837e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.606798503s
STEP: Saw pod success
May 14 11:55:38.765: INFO: Pod "pod-11bde99b-f084-4893-9cc4-6fab78c837e1" satisfied condition "Succeeded or Failed"
May 14 11:55:38.768: INFO: Trying to get logs from node kali-worker pod pod-11bde99b-f084-4893-9cc4-6fab78c837e1 container test-container: 
STEP: delete the pod
May 14 11:55:38.807: INFO: Waiting for pod pod-11bde99b-f084-4893-9cc4-6fab78c837e1 to disappear
May 14 11:55:38.819: INFO: Pod pod-11bde99b-f084-4893-9cc4-6fab78c837e1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:55:38.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4260" for this suite.

• [SLOW TEST:6.807 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3269,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:55:38.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
May 14 11:55:38.980: INFO: Waiting up to 5m0s for pod "pod-381794df-1915-4f95-8e25-6d4ca790466a" in namespace "emptydir-5836" to be "Succeeded or Failed"
May 14 11:55:39.005: INFO: Pod "pod-381794df-1915-4f95-8e25-6d4ca790466a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.337789ms
May 14 11:55:41.031: INFO: Pod "pod-381794df-1915-4f95-8e25-6d4ca790466a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050321104s
May 14 11:55:43.065: INFO: Pod "pod-381794df-1915-4f95-8e25-6d4ca790466a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085173571s
STEP: Saw pod success
May 14 11:55:43.065: INFO: Pod "pod-381794df-1915-4f95-8e25-6d4ca790466a" satisfied condition "Succeeded or Failed"
May 14 11:55:43.069: INFO: Trying to get logs from node kali-worker2 pod pod-381794df-1915-4f95-8e25-6d4ca790466a container test-container: 
STEP: delete the pod
May 14 11:55:43.211: INFO: Waiting for pod pod-381794df-1915-4f95-8e25-6d4ca790466a to disappear
May 14 11:55:43.214: INFO: Pod pod-381794df-1915-4f95-8e25-6d4ca790466a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:55:43.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5836" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3329,"failed":0}
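The emptyDir test above writes a mount point with mode 0777 and asserts the permissions read back as set. A local sketch of that permission behaviour, using a plain tempfile directory as a stand-in for the pod's emptyDir mount (the helper name is illustrative, not part of the e2e framework):

```python
import os
import stat
import tempfile

def make_mode_dir(mode: int) -> int:
    """Create a directory, chmod it, and return the mode that reads back."""
    d = tempfile.mkdtemp()
    # An explicit chmod bypasses the process umask, which is why the test
    # can assert an exact mode like 0777 regardless of node configuration.
    os.chmod(d, mode)
    return stat.S_IMODE(os.stat(d).st_mode)
```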
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:55:43.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-16e8e2e5-9387-4896-a2d8-3a5c9844d1b5
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:55:43.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8337" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":194,"skipped":3393,"failed":0}
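The ConfigMap test above succeeds because the API server rejects the create request: data keys must be non-empty and match a restricted character set. A rough re-implementation of that key rule (the real check lives in k8s.io/apimachinery validation code; this sketch only illustrates why an empty key fails):

```python
import re

# ConfigMap data keys: alphanumerics plus '-', '_', '.', max 253 chars.
_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def is_valid_configmap_key(key: str) -> bool:
    """Return True if `key` would plausibly be accepted as a ConfigMap data key."""
    return 0 < len(key) <= 253 and bool(_KEY_RE.match(key))
```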
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:55:43.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:55:43.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
May 14 11:55:46.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1684 create -f -'
May 14 11:55:52.064: INFO: stderr: ""
May 14 11:55:52.064: INFO: stdout: "e2e-test-crd-publish-openapi-3369-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 14 11:55:52.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1684 delete e2e-test-crd-publish-openapi-3369-crds test-foo'
May 14 11:55:52.168: INFO: stderr: ""
May 14 11:55:52.168: INFO: stdout: "e2e-test-crd-publish-openapi-3369-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
May 14 11:55:52.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1684 apply -f -'
May 14 11:55:52.421: INFO: stderr: ""
May 14 11:55:52.421: INFO: stdout: "e2e-test-crd-publish-openapi-3369-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 14 11:55:52.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1684 delete e2e-test-crd-publish-openapi-3369-crds test-foo'
May 14 11:55:52.521: INFO: stderr: ""
May 14 11:55:52.521: INFO: stdout: "e2e-test-crd-publish-openapi-3369-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
May 14 11:55:52.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1684 create -f -'
May 14 11:55:52.780: INFO: rc: 1
May 14 11:55:52.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1684 apply -f -'
May 14 11:55:53.015: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
May 14 11:55:53.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1684 create -f -'
May 14 11:55:53.332: INFO: rc: 1
May 14 11:55:53.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1684 apply -f -'
May 14 11:55:53.632: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
May 14 11:55:53.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3369-crds'
May 14 11:55:53.869: INFO: stderr: ""
May 14 11:55:53.869: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3369-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
May 14 11:55:53.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3369-crds.metadata'
May 14 11:55:54.115: INFO: stderr: ""
May 14 11:55:54.115: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3369-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. 
This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. 
If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. 
May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
May 14 11:55:54.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3369-crds.spec'
May 14 11:55:54.334: INFO: stderr: ""
May 14 11:55:54.334: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3369-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
May 14 11:55:54.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3369-crds.spec.bars'
May 14 11:55:54.545: INFO: stderr: ""
May 14 11:55:54.545: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3369-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May 14 11:55:54.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3369-crds.spec.bars2'
May 14 11:55:54.776: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:55:56.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1684" for this suite.

• [SLOW TEST:13.357 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":195,"skipped":3399,"failed":0}
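The three `rc: 1` runs in the CRD test above are kubectl's client-side validation rejecting objects with unknown properties and objects missing required ones. A toy structural check sketching that behaviour (the schema shape is simplified, not the real OpenAPI v3 model, and the function name is hypothetical):

```python
def validate(obj: dict, schema: dict) -> list:
    """Collect validation errors for `obj` against a simplified schema."""
    errors = []
    props = schema.get("properties", {})
    # Unknown fields are rejected when the schema disallows extras.
    if not schema.get("additionalProperties", False):
        for field in obj:
            if field not in props:
                errors.append("unknown field %r" % field)
    # Required fields must be present.
    for field in schema.get("required", []):
        if field not in obj:
            errors.append("missing required field %r" % field)
    return errors

# Mirrors the test-foo spec.bars item schema seen in `kubectl explain`:
# age, bazs, and a required name.
bar_schema = {
    "properties": {"name": {}, "age": {}, "bazs": {}},
    "required": ["name"],
}
```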
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:55:56.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May 14 11:55:56.863: INFO: Created pod &Pod{ObjectMeta:{dns-1965  dns-1965 /api/v1/namespaces/dns-1965/pods/dns-1965 108b473c-df44-41fa-8d5b-4f578e696d2d 4284711 0 2020-05-14 11:55:56 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-05-14 11:55:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wskcg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wskcg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wskcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kuber
netes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 14 11:55:56.873: INFO: The status of Pod dns-1965 is Pending, waiting for it to be Running (with Ready = true)
May 14 11:55:58.878: INFO: The status of Pod dns-1965 is Pending, waiting for it to be Running (with Ready = true)
May 14 11:56:00.878: INFO: The status of Pod dns-1965 is Pending, waiting for it to be Running (with Ready = true)
May 14 11:56:02.878: INFO: The status of Pod dns-1965 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
May 14 11:56:02.878: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1965 PodName:dns-1965 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 11:56:02.878: INFO: >>> kubeConfig: /root/.kube/config
I0514 11:56:02.921333       7 log.go:172] (0xc0039a6000) (0xc0011ecf00) Create stream
I0514 11:56:02.921381       7 log.go:172] (0xc0039a6000) (0xc0011ecf00) Stream added, broadcasting: 1
I0514 11:56:02.923303       7 log.go:172] (0xc0039a6000) Reply frame received for 1
I0514 11:56:02.923339       7 log.go:172] (0xc0039a6000) (0xc0011ed2c0) Create stream
I0514 11:56:02.923352       7 log.go:172] (0xc0039a6000) (0xc0011ed2c0) Stream added, broadcasting: 3
I0514 11:56:02.924356       7 log.go:172] (0xc0039a6000) Reply frame received for 3
I0514 11:56:02.924384       7 log.go:172] (0xc0039a6000) (0xc0011ed4a0) Create stream
I0514 11:56:02.924394       7 log.go:172] (0xc0039a6000) (0xc0011ed4a0) Stream added, broadcasting: 5
I0514 11:56:02.925648       7 log.go:172] (0xc0039a6000) Reply frame received for 5
I0514 11:56:03.029546       7 log.go:172] (0xc0039a6000) Data frame received for 3
I0514 11:56:03.029587       7 log.go:172] (0xc0011ed2c0) (3) Data frame handling
I0514 11:56:03.029613       7 log.go:172] (0xc0011ed2c0) (3) Data frame sent
I0514 11:56:03.030129       7 log.go:172] (0xc0039a6000) Data frame received for 5
I0514 11:56:03.030157       7 log.go:172] (0xc0011ed4a0) (5) Data frame handling
I0514 11:56:03.030465       7 log.go:172] (0xc0039a6000) Data frame received for 3
I0514 11:56:03.030483       7 log.go:172] (0xc0011ed2c0) (3) Data frame handling
I0514 11:56:03.032315       7 log.go:172] (0xc0039a6000) Data frame received for 1
I0514 11:56:03.032376       7 log.go:172] (0xc0011ecf00) (1) Data frame handling
I0514 11:56:03.032434       7 log.go:172] (0xc0011ecf00) (1) Data frame sent
I0514 11:56:03.032512       7 log.go:172] (0xc0039a6000) (0xc0011ecf00) Stream removed, broadcasting: 1
I0514 11:56:03.032541       7 log.go:172] (0xc0039a6000) Go away received
I0514 11:56:03.032645       7 log.go:172] (0xc0039a6000) (0xc0011ecf00) Stream removed, broadcasting: 1
I0514 11:56:03.032670       7 log.go:172] (0xc0039a6000) (0xc0011ed2c0) Stream removed, broadcasting: 3
I0514 11:56:03.032700       7 log.go:172] (0xc0039a6000) (0xc0011ed4a0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
May 14 11:56:03.032: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1965 PodName:dns-1965 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 11:56:03.032: INFO: >>> kubeConfig: /root/.kube/config
I0514 11:56:03.104017       7 log.go:172] (0xc0039a6630) (0xc0011edc20) Create stream
I0514 11:56:03.104055       7 log.go:172] (0xc0039a6630) (0xc0011edc20) Stream added, broadcasting: 1
I0514 11:56:03.111543       7 log.go:172] (0xc0039a6630) Reply frame received for 1
I0514 11:56:03.111601       7 log.go:172] (0xc0039a6630) (0xc002aaa1e0) Create stream
I0514 11:56:03.111616       7 log.go:172] (0xc0039a6630) (0xc002aaa1e0) Stream added, broadcasting: 3
I0514 11:56:03.113725       7 log.go:172] (0xc0039a6630) Reply frame received for 3
I0514 11:56:03.113768       7 log.go:172] (0xc0039a6630) (0xc0014e9d60) Create stream
I0514 11:56:03.113787       7 log.go:172] (0xc0039a6630) (0xc0014e9d60) Stream added, broadcasting: 5
I0514 11:56:03.115629       7 log.go:172] (0xc0039a6630) Reply frame received for 5
I0514 11:56:03.207393       7 log.go:172] (0xc0039a6630) Data frame received for 3
I0514 11:56:03.207429       7 log.go:172] (0xc002aaa1e0) (3) Data frame handling
I0514 11:56:03.207456       7 log.go:172] (0xc002aaa1e0) (3) Data frame sent
I0514 11:56:03.208420       7 log.go:172] (0xc0039a6630) Data frame received for 5
I0514 11:56:03.208444       7 log.go:172] (0xc0014e9d60) (5) Data frame handling
I0514 11:56:03.208524       7 log.go:172] (0xc0039a6630) Data frame received for 3
I0514 11:56:03.208535       7 log.go:172] (0xc002aaa1e0) (3) Data frame handling
I0514 11:56:03.210690       7 log.go:172] (0xc0039a6630) Data frame received for 1
I0514 11:56:03.210722       7 log.go:172] (0xc0011edc20) (1) Data frame handling
I0514 11:56:03.210741       7 log.go:172] (0xc0011edc20) (1) Data frame sent
I0514 11:56:03.210759       7 log.go:172] (0xc0039a6630) (0xc0011edc20) Stream removed, broadcasting: 1
I0514 11:56:03.210865       7 log.go:172] (0xc0039a6630) (0xc0011edc20) Stream removed, broadcasting: 1
I0514 11:56:03.210885       7 log.go:172] (0xc0039a6630) (0xc002aaa1e0) Stream removed, broadcasting: 3
I0514 11:56:03.211079       7 log.go:172] (0xc0039a6630) (0xc0014e9d60) Stream removed, broadcasting: 5
May 14 11:56:03.211: INFO: Deleting pod dns-1965...
I0514 11:56:03.211727       7 log.go:172] (0xc0039a6630) Go away received
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:56:03.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1965" for this suite.

• [SLOW TEST:6.600 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":196,"skipped":3410,"failed":0}
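With `dnsPolicy: None`, the pod's resolver config comes entirely from its `dnsConfig` (here nameserver 1.1.1.1 and search domain resolv.conf.local), which is what the agnhost `dns-server-list` and `dns-suffix` probes verify. A rough sketch of how such a resolv.conf could be rendered; the function is illustrative, not kubelet's actual implementation:

```python
def render_resolv_conf(nameservers, searches, options=()):
    """Render resolv.conf-style lines from a pod dnsConfig."""
    lines = ["nameserver %s" % ns for ns in nameservers]
    if searches:
        lines.append("search " + " ".join(searches))
    for opt in options:
        lines.append("options %s" % opt)
    return "\n".join(lines) + "\n"
```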
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:56:03.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-548cfb21-2ea2-493e-9ade-c57414977f4f
STEP: Creating a pod to test consume secrets
May 14 11:56:03.758: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9808a89a-cda6-4904-956e-17f830f2767c" in namespace "projected-3059" to be "Succeeded or Failed"
May 14 11:56:03.802: INFO: Pod "pod-projected-secrets-9808a89a-cda6-4904-956e-17f830f2767c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.804537ms
May 14 11:56:05.807: INFO: Pod "pod-projected-secrets-9808a89a-cda6-4904-956e-17f830f2767c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049163563s
May 14 11:56:07.811: INFO: Pod "pod-projected-secrets-9808a89a-cda6-4904-956e-17f830f2767c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053178726s
STEP: Saw pod success
May 14 11:56:07.811: INFO: Pod "pod-projected-secrets-9808a89a-cda6-4904-956e-17f830f2767c" satisfied condition "Succeeded or Failed"
May 14 11:56:07.814: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-9808a89a-cda6-4904-956e-17f830f2767c container projected-secret-volume-test: 
STEP: delete the pod
May 14 11:56:07.927: INFO: Waiting for pod pod-projected-secrets-9808a89a-cda6-4904-956e-17f830f2767c to disappear
May 14 11:56:07.966: INFO: Pod pod-projected-secrets-9808a89a-cda6-4904-956e-17f830f2767c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:56:07.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3059" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3422,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:56:07.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 14 11:56:08.238: INFO: Waiting up to 5m0s for pod "downward-api-441edf4e-deb8-4fdc-8078-0279f4311ee2" in namespace "downward-api-3070" to be "Succeeded or Failed"
May 14 11:56:08.260: INFO: Pod "downward-api-441edf4e-deb8-4fdc-8078-0279f4311ee2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.253515ms
May 14 11:56:10.264: INFO: Pod "downward-api-441edf4e-deb8-4fdc-8078-0279f4311ee2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025870081s
May 14 11:56:12.269: INFO: Pod "downward-api-441edf4e-deb8-4fdc-8078-0279f4311ee2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030871649s
STEP: Saw pod success
May 14 11:56:12.269: INFO: Pod "downward-api-441edf4e-deb8-4fdc-8078-0279f4311ee2" satisfied condition "Succeeded or Failed"
May 14 11:56:12.272: INFO: Trying to get logs from node kali-worker pod downward-api-441edf4e-deb8-4fdc-8078-0279f4311ee2 container dapi-container: 
STEP: delete the pod
May 14 11:56:12.353: INFO: Waiting for pod downward-api-441edf4e-deb8-4fdc-8078-0279f4311ee2 to disappear
May 14 11:56:12.438: INFO: Pod downward-api-441edf4e-deb8-4fdc-8078-0279f4311ee2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:56:12.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3070" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3442,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:56:12.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:56:16.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-22" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3450,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:56:16.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-ca06ab5b-caca-4afd-b2c7-c096b4e9cae0 in namespace container-probe-3433
May 14 11:56:20.751: INFO: Started pod liveness-ca06ab5b-caca-4afd-b2c7-c096b4e9cae0 in namespace container-probe-3433
STEP: checking the pod's current state and verifying that restartCount is present
May 14 11:56:20.754: INFO: Initial restart count of pod liveness-ca06ab5b-caca-4afd-b2c7-c096b4e9cae0 is 0
May 14 11:56:36.787: INFO: Restart count of pod container-probe-3433/liveness-ca06ab5b-caca-4afd-b2c7-c096b4e9cae0 is now 1 (16.033734236s elapsed)
May 14 11:56:56.922: INFO: Restart count of pod container-probe-3433/liveness-ca06ab5b-caca-4afd-b2c7-c096b4e9cae0 is now 2 (36.16815779s elapsed)
May 14 11:57:17.749: INFO: Restart count of pod container-probe-3433/liveness-ca06ab5b-caca-4afd-b2c7-c096b4e9cae0 is now 3 (56.995314476s elapsed)
May 14 11:57:35.786: INFO: Restart count of pod container-probe-3433/liveness-ca06ab5b-caca-4afd-b2c7-c096b4e9cae0 is now 4 (1m15.032159829s elapsed)
May 14 11:58:36.032: INFO: Restart count of pod container-probe-3433/liveness-ca06ab5b-caca-4afd-b2c7-c096b4e9cae0 is now 5 (2m15.278053076s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:58:36.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3433" for this suite.

• [SLOW TEST:139.451 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3461,"failed":0}
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:58:36.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-f06096bb-e65b-43fa-a4d7-520d21f2c25f
STEP: Creating a pod to test consume configMaps
May 14 11:58:36.172: INFO: Waiting up to 5m0s for pod "pod-configmaps-da02e5d7-39a7-42de-9a29-3b42cbb96e8f" in namespace "configmap-110" to be "Succeeded or Failed"
May 14 11:58:36.745: INFO: Pod "pod-configmaps-da02e5d7-39a7-42de-9a29-3b42cbb96e8f": Phase="Pending", Reason="", readiness=false. Elapsed: 573.032455ms
May 14 11:58:38.750: INFO: Pod "pod-configmaps-da02e5d7-39a7-42de-9a29-3b42cbb96e8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.577590825s
May 14 11:58:40.754: INFO: Pod "pod-configmaps-da02e5d7-39a7-42de-9a29-3b42cbb96e8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.581536938s
STEP: Saw pod success
May 14 11:58:40.754: INFO: Pod "pod-configmaps-da02e5d7-39a7-42de-9a29-3b42cbb96e8f" satisfied condition "Succeeded or Failed"
May 14 11:58:40.757: INFO: Trying to get logs from node kali-worker pod pod-configmaps-da02e5d7-39a7-42de-9a29-3b42cbb96e8f container configmap-volume-test: 
STEP: delete the pod
May 14 11:58:40.917: INFO: Waiting for pod pod-configmaps-da02e5d7-39a7-42de-9a29-3b42cbb96e8f to disappear
May 14 11:58:40.958: INFO: Pod pod-configmaps-da02e5d7-39a7-42de-9a29-3b42cbb96e8f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:58:40.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-110" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3461,"failed":0}
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:58:40.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-c6e85485-2b12-41ed-86d7-26d000c44c27
STEP: Creating a pod to test consume secrets
May 14 11:58:41.297: INFO: Waiting up to 5m0s for pod "pod-secrets-e64bc2f2-5e8c-40d6-be41-5079c8e95a9e" in namespace "secrets-5913" to be "Succeeded or Failed"
May 14 11:58:41.385: INFO: Pod "pod-secrets-e64bc2f2-5e8c-40d6-be41-5079c8e95a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 87.885986ms
May 14 11:58:43.389: INFO: Pod "pod-secrets-e64bc2f2-5e8c-40d6-be41-5079c8e95a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091448068s
May 14 11:58:45.394: INFO: Pod "pod-secrets-e64bc2f2-5e8c-40d6-be41-5079c8e95a9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096077269s
STEP: Saw pod success
May 14 11:58:45.394: INFO: Pod "pod-secrets-e64bc2f2-5e8c-40d6-be41-5079c8e95a9e" satisfied condition "Succeeded or Failed"
May 14 11:58:45.397: INFO: Trying to get logs from node kali-worker pod pod-secrets-e64bc2f2-5e8c-40d6-be41-5079c8e95a9e container secret-volume-test: 
STEP: delete the pod
May 14 11:58:45.500: INFO: Waiting for pod pod-secrets-e64bc2f2-5e8c-40d6-be41-5079c8e95a9e to disappear
May 14 11:58:45.503: INFO: Pod pod-secrets-e64bc2f2-5e8c-40d6-be41-5079c8e95a9e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:58:45.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5913" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3467,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:58:45.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
May 14 11:58:46.370: INFO: created pod pod-service-account-defaultsa
May 14 11:58:46.370: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 14 11:58:46.380: INFO: created pod pod-service-account-mountsa
May 14 11:58:46.381: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 14 11:58:46.408: INFO: created pod pod-service-account-nomountsa
May 14 11:58:46.408: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 14 11:58:46.555: INFO: created pod pod-service-account-defaultsa-mountspec
May 14 11:58:46.555: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 14 11:58:46.776: INFO: created pod pod-service-account-mountsa-mountspec
May 14 11:58:46.776: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 14 11:58:47.017: INFO: created pod pod-service-account-nomountsa-mountspec
May 14 11:58:47.018: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 14 11:58:47.089: INFO: created pod pod-service-account-defaultsa-nomountspec
May 14 11:58:47.090: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 14 11:58:47.302: INFO: created pod pod-service-account-mountsa-nomountspec
May 14 11:58:47.302: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 14 11:58:47.446: INFO: created pod pod-service-account-nomountsa-nomountspec
May 14 11:58:47.446: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:58:47.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4197" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":203,"skipped":3479,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:58:47.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 11:58:47.951: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:59:03.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4191" for this suite.

• [SLOW TEST:15.925 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":204,"skipped":3496,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:59:03.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
May 14 11:59:03.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
May 14 11:59:15.382: INFO: >>> kubeConfig: /root/.kube/config
May 14 11:59:18.317: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 11:59:29.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3560" for this suite.

• [SLOW TEST:26.157 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":205,"skipped":3549,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 11:59:29.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-08045277-1167-4a67-9e3c-a3d32387138c
STEP: Creating secret with name s-test-opt-upd-b06faff4-edde-4278-9e9c-406442939a96
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-08045277-1167-4a67-9e3c-a3d32387138c
STEP: Updating secret s-test-opt-upd-b06faff4-edde-4278-9e9c-406442939a96
STEP: Creating secret with name s-test-opt-create-2a5ff2d1-c51c-44c0-99c3-d9b084ba8af7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:01:07.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4622" for this suite.

• [SLOW TEST:98.247 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3565,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:01:07.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:01:08.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8841" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":207,"skipped":3565,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:01:08.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 12:01:08.503: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:01:10.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7696" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":208,"skipped":3574,"failed":0}
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:01:10.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 12:01:10.382: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 14 12:01:10.395: INFO: Number of nodes with available pods: 0
May 14 12:01:10.395: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 14 12:01:10.567: INFO: Number of nodes with available pods: 0
May 14 12:01:10.567: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:11.571: INFO: Number of nodes with available pods: 0
May 14 12:01:11.571: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:12.571: INFO: Number of nodes with available pods: 0
May 14 12:01:12.571: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:13.592: INFO: Number of nodes with available pods: 0
May 14 12:01:13.592: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:14.787: INFO: Number of nodes with available pods: 0
May 14 12:01:14.787: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:15.915: INFO: Number of nodes with available pods: 0
May 14 12:01:15.915: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:16.998: INFO: Number of nodes with available pods: 0
May 14 12:01:16.998: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:17.670: INFO: Number of nodes with available pods: 1
May 14 12:01:17.670: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 14 12:01:18.214: INFO: Number of nodes with available pods: 1
May 14 12:01:18.214: INFO: Number of running nodes: 0, number of available pods: 1
May 14 12:01:19.219: INFO: Number of nodes with available pods: 0
May 14 12:01:19.219: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 14 12:01:19.377: INFO: Number of nodes with available pods: 0
May 14 12:01:19.377: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:20.387: INFO: Number of nodes with available pods: 0
May 14 12:01:20.387: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:21.380: INFO: Number of nodes with available pods: 0
May 14 12:01:21.380: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:22.381: INFO: Number of nodes with available pods: 0
May 14 12:01:22.381: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:23.382: INFO: Number of nodes with available pods: 0
May 14 12:01:23.382: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:24.381: INFO: Number of nodes with available pods: 0
May 14 12:01:24.381: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:25.519: INFO: Number of nodes with available pods: 0
May 14 12:01:25.519: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:26.387: INFO: Number of nodes with available pods: 0
May 14 12:01:26.387: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:28.089: INFO: Number of nodes with available pods: 0
May 14 12:01:28.089: INFO: Node kali-worker is running more than one daemon pod
May 14 12:01:28.483: INFO: Number of nodes with available pods: 1
May 14 12:01:28.483: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3143, will wait for the garbage collector to delete the pods
May 14 12:01:28.893: INFO: Deleting DaemonSet.extensions daemon-set took: 63.218952ms
May 14 12:01:29.193: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.25548ms
May 14 12:01:43.796: INFO: Number of nodes with available pods: 0
May 14 12:01:43.796: INFO: Number of running nodes: 0, number of available pods: 0
May 14 12:01:43.798: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3143/daemonsets","resourceVersion":"4286190"},"items":null}

May 14 12:01:43.800: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3143/pods","resourceVersion":"4286190"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:01:43.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3143" for this suite.

• [SLOW TEST:33.587 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":209,"skipped":3578,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:01:43.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
May 14 12:01:43.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-5141 -- logs-generator --log-lines-total 100 --run-duration 20s'
May 14 12:01:44.036: INFO: stderr: ""
May 14 12:01:44.036: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
May 14 12:01:44.036: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
May 14 12:01:44.036: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5141" to be "running and ready, or succeeded"
May 14 12:01:44.047: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 11.018559ms
May 14 12:01:46.050: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014325291s
May 14 12:01:48.480: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443813737s
May 14 12:01:51.395: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.358783921s
May 14 12:01:53.651: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 9.614763908s
May 14 12:01:55.862: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 11.826035071s
May 14 12:01:58.072: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 14.036449089s
May 14 12:02:00.142: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 16.106241728s
May 14 12:02:00.142: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
May 14 12:02:00.142: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
May 14 12:02:00.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5141'
May 14 12:02:00.319: INFO: stderr: ""
May 14 12:02:00.319: INFO: stdout: "I0514 12:01:57.182260       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/d47j 470\nI0514 12:01:57.382389       1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/q4zv 459\nI0514 12:01:57.582429       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/t77 563\nI0514 12:01:57.782441       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/ghsv 514\nI0514 12:01:57.982364       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/kzb 462\nI0514 12:01:58.182431       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/cqlp 373\nI0514 12:01:58.382483       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/j45 461\nI0514 12:01:58.582424       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/xj84 507\nI0514 12:01:58.782474       1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/5vxg 385\nI0514 12:01:58.982465       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/2kkq 576\nI0514 12:01:59.182458       1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/wfx 367\nI0514 12:01:59.382499       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/g7tk 231\nI0514 12:01:59.582414       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/6s2 403\nI0514 12:01:59.782472       1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/tlg 403\nI0514 12:01:59.982415       1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/w8k 217\nI0514 12:02:00.182395       1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/qm9 485\n"
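Each generated line above follows the logs-generator format: a counter, an HTTP method, a pod URI, and a status code. A minimal shell sketch of the kind of pattern check the "matching strings" step performs (the regex here is an assumption for illustration, not the test's actual matcher):

```shell
# Sample line taken from the stdout captured above; check it against the
# expected logs-generator shape: "<n> <METHOD> /api/v1/namespaces/<ns>/pods/<name> <code>"
line='I0514 12:01:57.182260       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/d47j 470'
echo "$line" | grep -Eq '[0-9]+ (GET|POST|PUT) /api/v1/namespaces/[^/]+/pods/[^ ]+ [0-9]+' \
  && echo match || echo no-match   # prints "match"
```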
STEP: limiting log lines
May 14 12:02:00.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5141 --tail=1'
May 14 12:02:00.606: INFO: stderr: ""
May 14 12:02:00.607: INFO: stdout: "I0514 12:02:00.382412       1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/947 559\nI0514 12:02:00.582472       1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/r5j 539\n"
May 14 12:02:00.607: INFO: got output "I0514 12:02:00.382412       1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/947 559\nI0514 12:02:00.582472       1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/r5j 539\n"
May 14 12:02:00.607: FAIL: Expected
    : 2
to equal
    : 1

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.21.3()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1329 +0x507
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002bf7d00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc002bf7d00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc002bf7d00, 0x4ae8810)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
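A plausible reading of the failure above (an assumption, not confirmed by the log): `--tail=1` returned two lines because logs-generator, which emits a line every 200ms, wrote entry 17 between the tail computation and the read. The assertion just counts newline-terminated lines in the output; a minimal sketch of that count against what the test actually received:

```shell
# The two lines returned by `kubectl logs ... --tail=1` (abbreviated);
# counting them reproduces the "Expected 2 to equal 1" comparison.
out='I0514 12:02:00.382412 ... 16 GET /api/v1/namespaces/ns/pods/947 559
I0514 12:02:00.582472 ... 17 POST /api/v1/namespaces/ns/pods/r5j 539'
printf '%s\n' "$out" | grep -c ''   # prints 2, which fails the expected-1 check
```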
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
May 14 12:02:00.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5141'
May 14 12:02:10.208: INFO: stderr: ""
May 14 12:02:10.208: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "kubectl-5141".
STEP: Found 5 events.
May 14 12:02:10.211: INFO: At 2020-05-14 12:01:44 +0000 UTC - event for logs-generator: {default-scheduler } Scheduled: Successfully assigned kubectl-5141/logs-generator to kali-worker2
May 14 12:02:10.211: INFO: At 2020-05-14 12:01:45 +0000 UTC - event for logs-generator: {kubelet kali-worker2} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12" already present on machine
May 14 12:02:10.211: INFO: At 2020-05-14 12:01:56 +0000 UTC - event for logs-generator: {kubelet kali-worker2} Created: Created container logs-generator
May 14 12:02:10.211: INFO: At 2020-05-14 12:01:57 +0000 UTC - event for logs-generator: {kubelet kali-worker2} Started: Started container logs-generator
May 14 12:02:10.211: INFO: At 2020-05-14 12:02:00 +0000 UTC - event for logs-generator: {kubelet kali-worker2} Killing: Stopping container logs-generator
May 14 12:02:10.213: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
May 14 12:02:10.213: INFO: 
May 14 12:02:10.215: INFO: 
Logging node info for node kali-control-plane
May 14 12:02:10.216: INFO: Node Info: &Node{ObjectMeta:{kali-control-plane   /api/v1/nodes/kali-control-plane 84a583c8-90fb-49f1-81ac-1fbe141d1a1c 4285155 0 2020-04-29 09:30:59 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-04-29 09:31:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2020-04-29 09:31:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 48 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-14 11:57:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 
107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 
34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 
125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 
110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-05-14 11:57:45 +0000 UTC,LastTransitionTime:2020-04-29 09:30:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-05-14 11:57:45 +0000 UTC,LastTransitionTime:2020-04-29 09:30:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-05-14 11:57:45 +0000 UTC,LastTransitionTime:2020-04-29 09:30:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-05-14 11:57:45 +0000 UTC,LastTransitionTime:2020-04-29 09:31:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.19,},NodeAddress{Type:Hostname,Address:kali-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2146cf85bed648199604ab2e0e9ac609,SystemUUID:e83c0db4-babe-44fc-9dad-b5eeae6d23fd,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 
19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 14 12:02:10.217: INFO: 
Logging kubelet events for node kali-control-plane
May 14 12:02:10.218: INFO: 
Logging pods the kubelet thinks are on node kali-control-plane
May 14 12:02:10.238: INFO: etcd-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.238: INFO: 	Container etcd ready: true, restart count 0
May 14 12:02:10.238: INFO: coredns-66bff467f8-rvq2k started at 2020-04-29 09:31:37 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.238: INFO: 	Container coredns ready: true, restart count 0
May 14 12:02:10.238: INFO: kindnet-65djz started at 2020-04-29 09:31:19 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.238: INFO: 	Container kindnet-cni ready: true, restart count 0
May 14 12:02:10.238: INFO: coredns-66bff467f8-w6zxd started at 2020-04-29 09:31:37 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.238: INFO: 	Container coredns ready: true, restart count 0
May 14 12:02:10.238: INFO: local-path-provisioner-bd4bb6b75-6l9ph started at 2020-04-29 09:31:37 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.238: INFO: 	Container local-path-provisioner ready: true, restart count 0
May 14 12:02:10.238: INFO: kube-apiserver-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.238: INFO: 	Container kube-apiserver ready: true, restart count 0
May 14 12:02:10.238: INFO: kube-controller-manager-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.238: INFO: 	Container kube-controller-manager ready: true, restart count 1
May 14 12:02:10.238: INFO: kube-scheduler-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.238: INFO: 	Container kube-scheduler ready: true, restart count 0
May 14 12:02:10.238: INFO: kube-proxy-pnhtq started at 2020-04-29 09:31:19 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.238: INFO: 	Container kube-proxy ready: true, restart count 0
W0514 12:02:10.242005       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 12:02:10.307: INFO: 
Latency metrics for node kali-control-plane
May 14 12:02:10.307: INFO: 
Logging node info for node kali-worker
May 14 12:02:10.310: INFO: Node Info: &Node{ObjectMeta:{kali-worker   /api/v1/nodes/kali-worker d9882acc-073c-45e9-9299-9096bf571d2e 4286191 0 2020-04-29 09:31:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-04-29 09:31:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2020-04-29 09:32:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 50 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubelet Update v1 2020-05-14 11:57:18 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 
117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 
100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 
114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-05-14 11:57:18 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-05-14 11:57:18 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-05-14 11:57:18 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-05-14 11:57:18 +0000 UTC,LastTransitionTime:2020-04-29 09:32:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.15,},NodeAddress{Type:Hostname,Address:kali-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e96e6d32a4f2448f9fda0690bf27c25a,SystemUUID:62c26944-edd7-4df2-a453-f2dbfa247f6d,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 
docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3 docker.io/aquasec/kube-hunter:latest],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7 docker.io/aquasec/kube-bench:latest],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9],SizeBytes:8029111,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 
docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c docker.io/library/busybox:latest],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 14 12:02:10.311: INFO: 
Logging kubelet events for node kali-worker
May 14 12:02:10.313: INFO: 
Logging pods the kubelet thinks is on node kali-worker
May 14 12:02:10.332: INFO: kindnet-f8plf started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.332: INFO: 	Container kindnet-cni ready: true, restart count 1
May 14 12:02:10.332: INFO: kube-proxy-vrswj started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.332: INFO: 	Container kube-proxy ready: true, restart count 0
W0514 12:02:10.338212       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 12:02:10.371: INFO: 
Latency metrics for node kali-worker
May 14 12:02:10.371: INFO: 
Logging node info for node kali-worker2
May 14 12:02:10.374: INFO: Node Info: &Node{ObjectMeta:{kali-worker2   /api/v1/nodes/kali-worker2 6eb4ebcc-ce4f-4a4d-bd7f-5f7e293c044e 4285957 0 2020-04-29 09:31:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-04-29 09:31:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2020-04-29 09:32:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 49 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubelet Update v1 2020-05-14 12:00:54 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 
117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 
100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 
114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-05-14 12:00:54 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-05-14 12:00:54 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-05-14 12:00:54 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-05-14 12:00:54 +0000 UTC,LastTransitionTime:2020-04-29 09:32:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.18,},NodeAddress{Type:Hostname,Address:kali-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e6c808dc84074a009430113a4db25a88,SystemUUID:a7f2e4d4-2bac-4d1a-b10e-f9b7d6d56664,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 
docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3 docker.io/aquasec/kube-hunter:latest],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7 docker.io/aquasec/kube-bench:latest],SizeBytes:8039918,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 
docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c docker.io/library/busybox:latest],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 14 12:02:10.375: INFO: 
Logging kubelet events for node kali-worker2
May 14 12:02:10.376: INFO: 
Logging pods the kubelet thinks is on node kali-worker2
May 14 12:02:10.380: INFO: kube-proxy-mmnb6 started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.380: INFO: 	Container kube-proxy ready: true, restart count 0
May 14 12:02:10.380: INFO: kindnet-mcdh2 started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
May 14 12:02:10.380: INFO: 	Container kindnet-cni ready: true, restart count 0
W0514 12:02:10.382682       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 12:02:10.415: INFO: 
Latency metrics for node kali-worker2
May 14 12:02:10.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5141" for this suite.

• Failure [26.581 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703

    May 14 12:02:00.607: Expected
        : 2
    to equal
        : 1

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1329
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":209,"skipped":3593,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:02:10.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 14 12:02:22.259: INFO: 10 pods remaining
May 14 12:02:22.259: INFO: 10 pods has nil DeletionTimestamp
May 14 12:02:22.259: INFO: 
May 14 12:02:26.274: INFO: 8 pods remaining
May 14 12:02:26.274: INFO: 0 pods has nil DeletionTimestamp
May 14 12:02:26.274: INFO: 
May 14 12:02:29.825: INFO: 0 pods remaining
May 14 12:02:29.825: INFO: 0 pods has nil DeletionTimestamp
May 14 12:02:29.825: INFO: 
May 14 12:02:31.378: INFO: 0 pods remaining
May 14 12:02:31.378: INFO: 0 pods has nil DeletionTimestamp
May 14 12:02:31.378: INFO: 
STEP: Gathering metrics
W0514 12:02:33.313362       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 12:02:33.313: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:02:33.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7410" for this suite.

• [SLOW TEST:23.271 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":210,"skipped":3600,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:02:33.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 14 12:02:37.018: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:02:51.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8808" for this suite.

• [SLOW TEST:17.457 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":211,"skipped":3603,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:02:51.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 14 12:02:51.204: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 14 12:02:51.238: INFO: Waiting for terminating namespaces to be deleted...
May 14 12:02:51.241: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May 14 12:02:51.244: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
May 14 12:02:51.245: INFO: 	Container kindnet-cni ready: true, restart count 1
May 14 12:02:51.245: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
May 14 12:02:51.245: INFO: 	Container kube-proxy ready: true, restart count 0
May 14 12:02:51.245: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May 14 12:02:51.249: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
May 14 12:02:51.249: INFO: 	Container kindnet-cni ready: true, restart count 0
May 14 12:02:51.249: INFO: pod-init-fb58c915-cfb3-45cc-aac2-b985d06cdf54 from init-container-8808 started at 2020-05-14 12:02:40 +0000 UTC (1 container status recorded)
May 14 12:02:51.249: INFO: 	Container run1 ready: false, restart count 0
May 14 12:02:51.249: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
May 14 12:02:51.249: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.160ee36bf4835ed6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:02:52.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1111" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":275,"completed":212,"skipped":3610,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
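The FailedScheduling event above is produced by a pod whose nodeSelector no node in the cluster satisfies. A minimal sketch of such a manifest (the label key/value and image are illustrative assumptions, not taken from the log; only the pod name "restricted-pod" appears in the event):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    # Assumed label that no node in the cluster carries,
    # so the scheduler reports "node(s) didn't match node selector"
    example-label: no-such-value
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2   # assumed image/tag
```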
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:02:52.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:03:03.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4428" for this suite.

• [SLOW TEST:11.284 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":213,"skipped":3627,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
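The quota lifecycle traced above (create quota, create ReplicaSet, observe usage, delete, observe release) relies on an object-count quota. A minimal sketch, with the quota name assumed:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    # Object-count quota: limits the number of ReplicaSets in the namespace.
    # Usage rises to 1 when the ReplicaSet is created and returns to 0 on deletion.
    count/replicasets.apps: "1"
```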
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:03:03.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-d3471e11-2aef-4a40-bb80-f895e6ab76de in namespace container-probe-4205
May 14 12:03:11.658: INFO: Started pod liveness-d3471e11-2aef-4a40-bb80-f895e6ab76de in namespace container-probe-4205
STEP: checking the pod's current state and verifying that restartCount is present
May 14 12:03:11.661: INFO: Initial restart count of pod liveness-d3471e11-2aef-4a40-bb80-f895e6ab76de is 0
May 14 12:03:48.542: INFO: Restart count of pod container-probe-4205/liveness-d3471e11-2aef-4a40-bb80-f895e6ab76de is now 1 (36.881878297s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:03:48.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4205" for this suite.

• [SLOW TEST:45.119 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3656,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
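The restart observed above (restartCount 0 → 1 after ~37s) is driven by an HTTP liveness probe against /healthz. A sketch of such a pod, assuming an image whose /healthz endpoint starts returning errors after a while (image, tag, and args are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.12   # assumed image/tag
    args: ["liveness"]                               # assumed: serves /healthz, then fails it
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1   # restart the container on the first failed probe
```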
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:03:48.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 12:03:48.923: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73" in namespace "security-context-test-5656" to be "Succeeded or Failed"
May 14 12:03:49.760: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 836.77477ms
May 14 12:03:51.763: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.839863597s
May 14 12:03:54.981: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057211698s
May 14 12:03:57.113: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189403342s
May 14 12:03:59.227: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 10.303822301s
May 14 12:04:01.717: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 12.793583692s
May 14 12:04:03.720: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 14.796372838s
May 14 12:04:06.138: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 17.214826424s
May 14 12:04:08.142: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 19.218653444s
May 14 12:04:10.341: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 21.417475359s
May 14 12:04:12.344: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 23.420310419s
May 14 12:04:14.491: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 25.567997749s
May 14 12:04:16.496: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 27.572233288s
May 14 12:04:18.500: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 29.576331333s
May 14 12:04:20.655: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 31.731207652s
May 14 12:04:22.674: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Pending", Reason="", readiness=false. Elapsed: 33.750866623s
May 14 12:04:24.678: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.754685875s
May 14 12:04:24.678: INFO: Pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73" satisfied condition "Succeeded or Failed"
May 14 12:04:24.696: INFO: Got logs for pod "busybox-privileged-false-0a804f33-36b7-4a26-9814-dc39d0d40d73": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:04:24.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5656" for this suite.

• [SLOW TEST:36.258 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3679,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
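The "ip: RTNETLINK answers: Operation not permitted" log captured above is exactly what an unprivileged container gets when it tries a privileged network operation. A sketch of the pod (the command is an assumption consistent with that output):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false
spec:
  containers:
  - name: busybox
    image: busybox
    # Attempting to add a network interface requires CAP_NET_ADMIN,
    # which an unprivileged container lacks; "|| true" lets the pod still succeed.
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false
  restartPolicy: Never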
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:04:24.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:04:31.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1197" for this suite.

• [SLOW TEST:6.559 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3681,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
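The test above runs a one-shot busybox command and verifies its stdout is retrievable via the pod's logs. A minimal sketch (pod name and echoed text assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling
spec:
  containers:
  - name: busybox
    image: busybox
    # Anything written to stdout/stderr becomes available via `kubectl logs`
    command: ["sh", "-c", "echo 'Hello World'"]
  restartPolicy: Never
```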
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:04:31.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 14 12:04:31.636: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
May 14 12:04:32.299: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May 14 12:04:35.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:04:39.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:04:40.359: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:04:41.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054672, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:04:44.802: INFO: Waited 1.086778961s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:04:45.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7804" for this suite.

• [SLOW TEST:13.856 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":217,"skipped":3699,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
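"Registering the sample API server" above means creating an APIService object that routes an API group to the sample-apiserver Deployment's Service. A sketch of that registration, assuming the group/version and service name (only the namespace "aggregator-7804" appears in the log; the real test also wires up TLS rather than skipping verification):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # assumed group/version
spec:
  group: wardle.example.com            # assumed
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api                   # assumed Service fronting the deployment
    namespace: aggregator-7804
  insecureSkipTLSVerify: true          # simplification for the sketch
```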
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:04:45.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-1514
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 14 12:04:45.779: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 14 12:04:46.152: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 14 12:04:48.155: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 14 12:04:50.156: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 14 12:04:52.155: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 14 12:04:54.155: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 14 12:04:56.155: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 14 12:04:58.155: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 14 12:05:00.155: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 14 12:05:03.121: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 14 12:05:04.155: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 14 12:05:04.159: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 14 12:05:06.245: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 14 12:05:15.912: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.109:8080/dial?request=hostname&protocol=udp&host=10.244.2.223&port=8081&tries=1'] Namespace:pod-network-test-1514 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 12:05:15.912: INFO: >>> kubeConfig: /root/.kube/config
I0514 12:05:15.943550       7 log.go:172] (0xc002e8e370) (0xc000ebcb40) Create stream
I0514 12:05:15.943576       7 log.go:172] (0xc002e8e370) (0xc000ebcb40) Stream added, broadcasting: 1
I0514 12:05:15.944991       7 log.go:172] (0xc002e8e370) Reply frame received for 1
I0514 12:05:15.945033       7 log.go:172] (0xc002e8e370) (0xc001f36b40) Create stream
I0514 12:05:15.945045       7 log.go:172] (0xc002e8e370) (0xc001f36b40) Stream added, broadcasting: 3
I0514 12:05:15.946135       7 log.go:172] (0xc002e8e370) Reply frame received for 3
I0514 12:05:15.946161       7 log.go:172] (0xc002e8e370) (0xc000ebcd20) Create stream
I0514 12:05:15.946172       7 log.go:172] (0xc002e8e370) (0xc000ebcd20) Stream added, broadcasting: 5
I0514 12:05:15.947009       7 log.go:172] (0xc002e8e370) Reply frame received for 5
I0514 12:05:16.042946       7 log.go:172] (0xc002e8e370) Data frame received for 3
I0514 12:05:16.042977       7 log.go:172] (0xc001f36b40) (3) Data frame handling
I0514 12:05:16.042997       7 log.go:172] (0xc001f36b40) (3) Data frame sent
I0514 12:05:16.043258       7 log.go:172] (0xc002e8e370) Data frame received for 3
I0514 12:05:16.043278       7 log.go:172] (0xc001f36b40) (3) Data frame handling
I0514 12:05:16.043384       7 log.go:172] (0xc002e8e370) Data frame received for 5
I0514 12:05:16.043414       7 log.go:172] (0xc000ebcd20) (5) Data frame handling
I0514 12:05:16.044746       7 log.go:172] (0xc002e8e370) Data frame received for 1
I0514 12:05:16.044768       7 log.go:172] (0xc000ebcb40) (1) Data frame handling
I0514 12:05:16.044782       7 log.go:172] (0xc000ebcb40) (1) Data frame sent
I0514 12:05:16.044799       7 log.go:172] (0xc002e8e370) (0xc000ebcb40) Stream removed, broadcasting: 1
I0514 12:05:16.044818       7 log.go:172] (0xc002e8e370) Go away received
I0514 12:05:16.044934       7 log.go:172] (0xc002e8e370) (0xc000ebcb40) Stream removed, broadcasting: 1
I0514 12:05:16.044963       7 log.go:172] (0xc002e8e370) (0xc001f36b40) Stream removed, broadcasting: 3
I0514 12:05:16.044982       7 log.go:172] (0xc002e8e370) (0xc000ebcd20) Stream removed, broadcasting: 5
May 14 12:05:16.045: INFO: Waiting for responses: map[]
May 14 12:05:16.065: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.109:8080/dial?request=hostname&protocol=udp&host=10.244.1.108&port=8081&tries=1'] Namespace:pod-network-test-1514 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 12:05:16.065: INFO: >>> kubeConfig: /root/.kube/config
I0514 12:05:16.087026       7 log.go:172] (0xc002da6580) (0xc001221680) Create stream
I0514 12:05:16.087046       7 log.go:172] (0xc002da6580) (0xc001221680) Stream added, broadcasting: 1
I0514 12:05:16.088219       7 log.go:172] (0xc002da6580) Reply frame received for 1
I0514 12:05:16.088243       7 log.go:172] (0xc002da6580) (0xc001f79540) Create stream
I0514 12:05:16.088250       7 log.go:172] (0xc002da6580) (0xc001f79540) Stream added, broadcasting: 3
I0514 12:05:16.088997       7 log.go:172] (0xc002da6580) Reply frame received for 3
I0514 12:05:16.089018       7 log.go:172] (0xc002da6580) (0xc001f795e0) Create stream
I0514 12:05:16.089024       7 log.go:172] (0xc002da6580) (0xc001f795e0) Stream added, broadcasting: 5
I0514 12:05:16.089800       7 log.go:172] (0xc002da6580) Reply frame received for 5
I0514 12:05:16.142568       7 log.go:172] (0xc002da6580) Data frame received for 3
I0514 12:05:16.142612       7 log.go:172] (0xc001f79540) (3) Data frame handling
I0514 12:05:16.142659       7 log.go:172] (0xc001f79540) (3) Data frame sent
I0514 12:05:16.142925       7 log.go:172] (0xc002da6580) Data frame received for 3
I0514 12:05:16.142977       7 log.go:172] (0xc001f79540) (3) Data frame handling
I0514 12:05:16.143025       7 log.go:172] (0xc002da6580) Data frame received for 5
I0514 12:05:16.143054       7 log.go:172] (0xc001f795e0) (5) Data frame handling
I0514 12:05:16.144471       7 log.go:172] (0xc002da6580) Data frame received for 1
I0514 12:05:16.144499       7 log.go:172] (0xc001221680) (1) Data frame handling
I0514 12:05:16.144519       7 log.go:172] (0xc001221680) (1) Data frame sent
I0514 12:05:16.144547       7 log.go:172] (0xc002da6580) (0xc001221680) Stream removed, broadcasting: 1
I0514 12:05:16.144578       7 log.go:172] (0xc002da6580) Go away received
I0514 12:05:16.144826       7 log.go:172] (0xc002da6580) (0xc001221680) Stream removed, broadcasting: 1
I0514 12:05:16.144846       7 log.go:172] (0xc002da6580) (0xc001f79540) Stream removed, broadcasting: 3
I0514 12:05:16.144861       7 log.go:172] (0xc002da6580) (0xc001f795e0) Stream removed, broadcasting: 5
May 14 12:05:16.144: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:05:16.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1514" for this suite.

• [SLOW TEST:30.803 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3733,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
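The /dial queries above are answered by netserver pods running a small test webserver that proxies a UDP request to the target and reports the hostname it got back. A sketch of one such pod, with the image/tag and args assumed (the HTTP port 8080 and UDP port 8081 match the URLs in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0
spec:
  containers:
  - name: webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.12   # assumed image/tag
    args: ["netexec", "--http-port=8080", "--udp-port=8081"]
    ports:
    - containerPort: 8080        # serves /dial and /hostname over HTTP
    - containerPort: 8081        # echoes hostname over UDP
      protocol: UDP
```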
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:05:16.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-8b244c7f-dcdf-4582-b289-5fa3f889ed3d
STEP: Creating a pod to test consume secrets
May 14 12:05:16.234: INFO: Waiting up to 5m0s for pod "pod-secrets-427ccbd3-75da-4e44-8e8d-b64fdc0758ae" in namespace "secrets-9268" to be "Succeeded or Failed"
May 14 12:05:16.249: INFO: Pod "pod-secrets-427ccbd3-75da-4e44-8e8d-b64fdc0758ae": Phase="Pending", Reason="", readiness=false. Elapsed: 15.067033ms
May 14 12:05:18.263: INFO: Pod "pod-secrets-427ccbd3-75da-4e44-8e8d-b64fdc0758ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029231654s
May 14 12:05:20.299: INFO: Pod "pod-secrets-427ccbd3-75da-4e44-8e8d-b64fdc0758ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065319213s
May 14 12:05:22.744: INFO: Pod "pod-secrets-427ccbd3-75da-4e44-8e8d-b64fdc0758ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.510183186s
May 14 12:05:24.844: INFO: Pod "pod-secrets-427ccbd3-75da-4e44-8e8d-b64fdc0758ae": Phase="Running", Reason="", readiness=true. Elapsed: 8.610089587s
May 14 12:05:26.974: INFO: Pod "pod-secrets-427ccbd3-75da-4e44-8e8d-b64fdc0758ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.740287476s
STEP: Saw pod success
May 14 12:05:26.974: INFO: Pod "pod-secrets-427ccbd3-75da-4e44-8e8d-b64fdc0758ae" satisfied condition "Succeeded or Failed"
May 14 12:05:26.976: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-427ccbd3-75da-4e44-8e8d-b64fdc0758ae container secret-volume-test: 
STEP: delete the pod
May 14 12:05:27.205: INFO: Waiting for pod pod-secrets-427ccbd3-75da-4e44-8e8d-b64fdc0758ae to disappear
May 14 12:05:27.216: INFO: Pod pod-secrets-427ccbd3-75da-4e44-8e8d-b64fdc0758ae no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:05:27.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9268" for this suite.

• [SLOW TEST:11.079 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3740,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
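The test above mounts a Secret as a volume and reads a key back from the filesystem. A sketch of the consuming pod (the secret name and container name are taken from the log; the image, args, and key name are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.12   # assumed image/tag
    args: ["mounttest", "--file_content=/etc/secret-volume/data-1"]   # assumed key
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      # Each key in the Secret appears as a file under the mount path
      secretName: secret-test-8b244c7f-dcdf-4582-b289-5fa3f889ed3d
  restartPolicy: Never
```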
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:05:27.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 12:05:28.252: INFO: Waiting up to 5m0s for pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620" in namespace "security-context-test-5948" to be "Succeeded or Failed"
May 14 12:05:28.511: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Pending", Reason="", readiness=false. Elapsed: 259.569321ms
May 14 12:05:31.018: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Pending", Reason="", readiness=false. Elapsed: 2.766671767s
May 14 12:05:33.084: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Pending", Reason="", readiness=false. Elapsed: 4.831928118s
May 14 12:05:35.211: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Pending", Reason="", readiness=false. Elapsed: 6.959869134s
May 14 12:05:37.607: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Pending", Reason="", readiness=false. Elapsed: 9.355370709s
May 14 12:05:40.024: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Pending", Reason="", readiness=false. Elapsed: 11.772178288s
May 14 12:05:42.026: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Pending", Reason="", readiness=false. Elapsed: 13.774896581s
May 14 12:05:44.043: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Pending", Reason="", readiness=false. Elapsed: 15.79110356s
May 14 12:05:46.174: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Pending", Reason="", readiness=false. Elapsed: 17.922059343s
May 14 12:05:48.683: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Running", Reason="", readiness=true. Elapsed: 20.431892029s
May 14 12:05:50.686: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Running", Reason="", readiness=true. Elapsed: 22.434728108s
May 14 12:05:53.881: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Running", Reason="", readiness=true. Elapsed: 25.629853452s
May 14 12:05:55.884: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Running", Reason="", readiness=true. Elapsed: 27.632410222s
May 14 12:05:58.191: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.93990303s
May 14 12:05:58.192: INFO: Pod "busybox-user-65534-0704be10-0d3b-4cfb-9cfc-2110b34c1620" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:05:58.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5948" for this suite.

• [SLOW TEST:31.008 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3762,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
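The long run of `Phase="Pending"` lines above comes from the framework polling the pod roughly every two seconds until it reaches the terminal condition "Succeeded or Failed" or the 5m timeout expires. A minimal Python sketch of that poll-until-terminal-phase loop (hypothetical helper names, not the e2e framework's actual Go API):

```python
import time

def wait_for_pod_phase(get_phase, target_phases=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns one of target_phases, or raise
    TimeoutError once `timeout` seconds have elapsed.

    Mirrors the log pattern 'Waiting up to 5m0s for pod ... to be
    "Succeeded or Failed"'; clock/sleep are injectable for testing.
    """
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in target_phases:
            return phase  # terminal phase reached: condition satisfied
        if clock() >= deadline:
            raise TimeoutError(f"pod still in phase {phase!r} after {timeout}s")
        sleep(interval)
```

Each `Elapsed:` line in the log corresponds to one iteration of such a loop; note that a pod passing through `Running` does not satisfy the condition, only `Succeeded` or `Failed` does.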
SSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:05:58.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
May 14 12:05:58.410: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

May 14 12:05:58.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-319'
May 14 12:06:03.021: INFO: stderr: ""
May 14 12:06:03.021: INFO: stdout: "service/agnhost-slave created\n"
May 14 12:06:03.021: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

May 14 12:06:03.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-319'
May 14 12:06:03.293: INFO: stderr: ""
May 14 12:06:03.293: INFO: stdout: "service/agnhost-master created\n"
May 14 12:06:03.294: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 14 12:06:03.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-319'
May 14 12:06:03.543: INFO: stderr: ""
May 14 12:06:03.543: INFO: stdout: "service/frontend created\n"
May 14 12:06:03.543: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May 14 12:06:03.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-319'
May 14 12:06:03.772: INFO: stderr: ""
May 14 12:06:03.772: INFO: stdout: "deployment.apps/frontend created\n"
May 14 12:06:03.772: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 14 12:06:03.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-319'
May 14 12:06:04.403: INFO: stderr: ""
May 14 12:06:04.403: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 14 12:06:04.403: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 14 12:06:04.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-319'
May 14 12:06:04.722: INFO: stderr: ""
May 14 12:06:04.722: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 14 12:06:04.722: INFO: Waiting for all frontend pods to be Running.
May 14 12:06:14.773: INFO: Waiting for frontend to serve content.
May 14 12:06:14.780: INFO: Trying to add a new entry to the guestbook.
May 14 12:06:14.789: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 14 12:06:14.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-319'
May 14 12:06:15.094: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 14 12:06:15.094: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May 14 12:06:15.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-319'
May 14 12:06:15.265: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 14 12:06:15.265: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 14 12:06:15.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-319'
May 14 12:06:15.407: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 14 12:06:15.407: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 14 12:06:15.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-319'
May 14 12:06:15.540: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 14 12:06:15.540: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 14 12:06:15.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-319'
May 14 12:06:15.646: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 14 12:06:15.646: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 14 12:06:15.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-319'
May 14 12:06:16.386: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 14 12:06:16.386: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:06:16.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-319" for this suite.

• [SLOW TEST:18.162 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":221,"skipped":3765,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
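Each `Running '/usr/local/bin/kubectl ...'` line in the guestbook test above records the exact command the framework shells out to, piping the preceding manifest to stdin via `-f -`. A small Python sketch reconstructing that argv (illustrative only; the real framework assembles and runs this in Go):

```python
def kubectl_create_from_stdin(server, kubeconfig, namespace):
    """Build the argv logged as:
    Running '/usr/local/bin/kubectl --server=... --kubeconfig=... create -f - --namespace=...'

    The returned list can be passed to subprocess.run(argv, input=manifest,
    text=True, capture_output=True) to replay one create step against a
    reachable cluster (hypothetical usage; requires kubectl and credentials).
    """
    return [
        "/usr/local/bin/kubectl",
        f"--server={server}",
        f"--kubeconfig={kubeconfig}",
        "create", "-f", "-",          # read the manifest from stdin
        f"--namespace={namespace}",
    ]
```

The cleanup steps use the same shape with `delete --grace-period=0 --force -f -`, which is why each deletion prints the "Immediate deletion does not wait for confirmation" warning seen above.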
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:06:16.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 12:06:16.894: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
May 14 12:06:16.949: INFO: Pod name sample-pod: Found 0 pods out of 1
May 14 12:06:21.955: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 14 12:06:24.007: INFO: Creating deployment "test-rolling-update-deployment"
May 14 12:06:24.011: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
May 14 12:06:24.023: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
May 14 12:06:26.028: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
May 14 12:06:26.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054784, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054784, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054784, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054784, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:06:28.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054784, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054784, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054784, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054784, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:06:30.034: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May 14 12:06:30.043: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-3962 /apis/apps/v1/namespaces/deployment-3962/deployments/test-rolling-update-deployment a26bac04-ed6f-4d6f-a10f-19fad025d898 4287649 1 2020-05-14 12:06:24 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-05-14 12:06:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 
58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-14 12:06:28 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 
103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036b6918  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-14 12:06:24 +0000 UTC,LastTransitionTime:2020-05-14 12:06:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-05-14 12:06:28 +0000 UTC,LastTransitionTime:2020-05-14 12:06:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
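The `FieldsV1{Raw:*[123 34 102 58 ...]}` blocks in the Deployment dump above are managed-fields JSON that Go's printf renders as a slice of ASCII byte values (each run begins with `123 34 102 58 109 101 116 97 100 97 116 97`, i.e. `{"f:metadata`). A short Python sketch (hypothetical helper, not part of any Kubernetes tooling) that turns such a dump back into readable JSON:

```python
import json

def decode_fieldsv1(raw_bytes):
    """Decode a FieldsV1 Raw dump, printed by Go as a []byte of ASCII codes,
    back into the managed-fields JSON object it encodes.

    raw_bytes: iterable of ints (0-255), e.g. the numbers copied out of a
    log line like 'FieldsV1{Raw:*[123 34 102 58 ...]}'.
    """
    return json.loads(bytes(raw_bytes).decode("utf-8"))
```

Applied to the dumps above, this yields the per-manager field ownership map recorded by server-side apply (keys like `f:metadata`, `f:spec`, `f:status`), which is far easier to read than the raw byte slice.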

May 14 12:06:30.046: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-3962 /apis/apps/v1/namespaces/deployment-3962/replicasets/test-rolling-update-deployment-59d5cb45c7 9a71567b-41e5-4ebc-bb12-30fe55536436 4287638 1 2020-05-14 12:06:24 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment a26bac04-ed6f-4d6f-a10f-19fad025d898 0xc005434c67 0xc005434c68}] []  [{kube-controller-manager Update apps/v1 2020-05-14 12:06:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 50 54 98 97 99 48 52 45 101 100 54 102 45 52 100 54 102 45 97 49 48 102 45 49 57 102 97 100 48 50 53 100 56 57 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 
111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 
101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005434cf8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May 14 12:06:30.046: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
May 14 12:06:30.047: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-3962 /apis/apps/v1/namespaces/deployment-3962/replicasets/test-rolling-update-controller 8950c485-c192-4428-a88c-f7883a5b6775 4287647 2 2020-05-14 12:06:16 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment a26bac04-ed6f-4d6f-a10f-19fad025d898 0xc005434b37 0xc005434b38}] []  [{e2e.test Update apps/v1 2020-05-14 12:06:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 
111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-14 12:06:28 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 50 54 98 97 99 48 52 45 101 100 54 102 45 52 100 54 102 45 97 49 48 102 45 49 57 102 97 100 48 50 53 100 56 57 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 
102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005434bf8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 14 12:06:30.050: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-qvbr7" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-qvbr7 test-rolling-update-deployment-59d5cb45c7- deployment-3962 /api/v1/namespaces/deployment-3962/pods/test-rolling-update-deployment-59d5cb45c7-qvbr7 2f871401-6c76-41df-bb36-a49c98ab3798 4287637 0 2020-05-14 12:06:24 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 9a71567b-41e5-4ebc-bb12-30fe55536436 0xc0054c86e7 0xc0054c86e8}] []  [{kube-controller-manager Update v1 2020-05-14 12:06:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 97 55 49 53 54 55 98 45 52 49 101 53 45 52 101 98 99 45 98 98 49 50 45 51 48 102 101 53 53 53 51 54 52 51 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 
125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-14 12:06:28 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 
116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 49 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lg9rj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lg9rj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lg9rj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:n
il,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 12:06:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 12:06:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 12:06:28 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 12:06:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.114,StartTime:2020-05-14 12:06:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 12:06:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://e3a3c9c91aa31918864eeccb8f729b1d695e3683b2b61efd71934913b76e5036,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.114,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
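The `FieldsV1{Raw:*[...]}` sections in the ReplicaSet and Pod dumps above are managed-fields metadata printed as space-separated decimal byte values rather than text. Assuming the bytes are UTF-8-encoded JSON (which is how the API server serializes FieldsV1), they can be decoded with a short sketch like this; the `sample` string here is a shortened, hypothetical prefix for illustration, not the full dump:

```python
def decode_fieldsv1_raw(raw: str) -> str:
    """Turn a space-separated decimal byte dump (as printed in the log's
    FieldsV1 Raw fields) back into its UTF-8 string form."""
    return bytes(int(b) for b in raw.split()).decode("utf-8")

# Shortened illustrative prefix; the real dumps above are much longer.
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
print(decode_fieldsv1_raw(sample))  # → {"f:metadata":{}}
```

Decoded this way, the dumps turn out to be the usual server-side-apply field ownership maps (`{"f:metadata":{"f:annotations":...}}`) recorded per manager (`e2e.test`, `kube-controller-manager`, `kubelet`).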
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:06:30.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3962" for this suite.

• [SLOW TEST:13.654 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":222,"skipped":3801,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
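Each completed spec is also emitted as a one-line JSON summary like the `PASSED` record above, which makes overall progress easy to tally mechanically. A minimal sketch, using the record above verbatim:

```python
import json

# The JSON progress record emitted after the RollingUpdateDeployment spec.
line = ('{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should '
        'delete old pods and create new ones [Conformance]","total":275,'
        '"completed":222,"skipped":3801,"failed":1,"failures":["[sig-cli] '
        'Kubectl client Kubectl logs should be able to retrieve and filter '
        'logs  [Conformance]"]}')

summary = json.loads(line)
remaining = summary["total"] - summary["completed"]
print(f'{summary["completed"]}/{summary["total"]} done, '
      f'{remaining} remaining, {summary["failed"]} failed')
```

At this point in the run, 222 of 275 conformance specs have completed with one recorded failure (the Kubectl logs spec).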
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:06:30.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-5193/configmap-test-9d63ee02-e7be-4ae1-b520-5220c962ba75
STEP: Creating a pod to test consume configMaps
May 14 12:06:30.143: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d08a3cd-de1c-4110-a0e4-c935871e8640" in namespace "configmap-5193" to be "Succeeded or Failed"
May 14 12:06:30.159: INFO: Pod "pod-configmaps-2d08a3cd-de1c-4110-a0e4-c935871e8640": Phase="Pending", Reason="", readiness=false. Elapsed: 16.587247ms
May 14 12:06:32.162: INFO: Pod "pod-configmaps-2d08a3cd-de1c-4110-a0e4-c935871e8640": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01964354s
May 14 12:06:34.198: INFO: Pod "pod-configmaps-2d08a3cd-de1c-4110-a0e4-c935871e8640": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055283699s
STEP: Saw pod success
May 14 12:06:34.198: INFO: Pod "pod-configmaps-2d08a3cd-de1c-4110-a0e4-c935871e8640" satisfied condition "Succeeded or Failed"
May 14 12:06:34.200: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-2d08a3cd-de1c-4110-a0e4-c935871e8640 container env-test: 
STEP: delete the pod
May 14 12:06:34.240: INFO: Waiting for pod pod-configmaps-2d08a3cd-de1c-4110-a0e4-c935871e8640 to disappear
May 14 12:06:34.255: INFO: Pod pod-configmaps-2d08a3cd-de1c-4110-a0e4-c935871e8640 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:06:34.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5193" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3851,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
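The `Elapsed:` values in the pod-polling lines above are Go duration strings (`16.587247ms`, `2.01964354s`, and so on). A small sketch for normalizing the suffix forms that actually appear in this log into seconds — this is a simplified stand-in, not Go's full `time.ParseDuration` grammar:

```python
import re

def parse_go_duration(s: str) -> float:
    """Convert a Go-style duration string (e.g. '2.01964354s', '16.587247ms',
    '5m0s') to seconds. Only handles the unit suffixes seen in this log."""
    units = {"ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|s|m|h)", s):
        total += float(value) * units[unit]
    return total

print(parse_go_duration("4.055283699s"))  # total poll time for the pod above
```

This also handles compound timeouts like the `5m0s` used in `Waiting up to 5m0s for pod ...`.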
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:06:34.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 14 12:06:35.281: INFO: Pod name wrapped-volume-race-25ad686c-09de-4637-aa38-11c6e3a6d878: Found 0 pods out of 5
May 14 12:06:40.880: INFO: Pod name wrapped-volume-race-25ad686c-09de-4637-aa38-11c6e3a6d878: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-25ad686c-09de-4637-aa38-11c6e3a6d878 in namespace emptydir-wrapper-1439, will wait for the garbage collector to delete the pods
May 14 12:06:53.023: INFO: Deleting ReplicationController wrapped-volume-race-25ad686c-09de-4637-aa38-11c6e3a6d878 took: 5.545851ms
May 14 12:06:55.024: INFO: Terminating ReplicationController wrapped-volume-race-25ad686c-09de-4637-aa38-11c6e3a6d878 pods took: 2.000283802s
STEP: Creating RC which spawns configmap-volume pods
May 14 12:07:13.486: INFO: Pod name wrapped-volume-race-2ba8d2d9-3229-4f00-8255-5dfcfe12b33c: Found 0 pods out of 5
May 14 12:07:18.493: INFO: Pod name wrapped-volume-race-2ba8d2d9-3229-4f00-8255-5dfcfe12b33c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2ba8d2d9-3229-4f00-8255-5dfcfe12b33c in namespace emptydir-wrapper-1439, will wait for the garbage collector to delete the pods
May 14 12:07:56.623: INFO: Deleting ReplicationController wrapped-volume-race-2ba8d2d9-3229-4f00-8255-5dfcfe12b33c took: 6.169204ms
May 14 12:07:57.023: INFO: Terminating ReplicationController wrapped-volume-race-2ba8d2d9-3229-4f00-8255-5dfcfe12b33c pods took: 400.198685ms
STEP: Creating RC which spawns configmap-volume pods
May 14 12:08:34.123: INFO: Pod name wrapped-volume-race-42d5decc-52ce-480f-a7cd-fa8a0e979166: Found 0 pods out of 5
May 14 12:08:39.130: INFO: Pod name wrapped-volume-race-42d5decc-52ce-480f-a7cd-fa8a0e979166: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-42d5decc-52ce-480f-a7cd-fa8a0e979166 in namespace emptydir-wrapper-1439, will wait for the garbage collector to delete the pods
May 14 12:08:59.249: INFO: Deleting ReplicationController wrapped-volume-race-42d5decc-52ce-480f-a7cd-fa8a0e979166 took: 7.500395ms
May 14 12:08:59.649: INFO: Terminating ReplicationController wrapped-volume-race-42d5decc-52ce-480f-a7cd-fa8a0e979166 pods took: 400.248364ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:09:13.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1439" for this suite.

• [SLOW TEST:159.369 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":224,"skipped":3860,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:09:13.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 14 12:09:13.753: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0fd7152-d2e4-4137-8f44-3d7a88260fca" in namespace "downward-api-1836" to be "Succeeded or Failed"
May 14 12:09:13.770: INFO: Pod "downwardapi-volume-f0fd7152-d2e4-4137-8f44-3d7a88260fca": Phase="Pending", Reason="", readiness=false. Elapsed: 16.845666ms
May 14 12:09:15.983: INFO: Pod "downwardapi-volume-f0fd7152-d2e4-4137-8f44-3d7a88260fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229723195s
May 14 12:09:17.986: INFO: Pod "downwardapi-volume-f0fd7152-d2e4-4137-8f44-3d7a88260fca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233393147s
May 14 12:09:20.116: INFO: Pod "downwardapi-volume-f0fd7152-d2e4-4137-8f44-3d7a88260fca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.362422129s
May 14 12:09:22.237: INFO: Pod "downwardapi-volume-f0fd7152-d2e4-4137-8f44-3d7a88260fca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.484297149s
May 14 12:09:24.373: INFO: Pod "downwardapi-volume-f0fd7152-d2e4-4137-8f44-3d7a88260fca": Phase="Running", Reason="", readiness=true. Elapsed: 10.620383952s
May 14 12:09:26.432: INFO: Pod "downwardapi-volume-f0fd7152-d2e4-4137-8f44-3d7a88260fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.679399029s
STEP: Saw pod success
May 14 12:09:26.433: INFO: Pod "downwardapi-volume-f0fd7152-d2e4-4137-8f44-3d7a88260fca" satisfied condition "Succeeded or Failed"
May 14 12:09:26.446: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f0fd7152-d2e4-4137-8f44-3d7a88260fca container client-container: 
STEP: delete the pod
May 14 12:09:26.588: INFO: Waiting for pod downwardapi-volume-f0fd7152-d2e4-4137-8f44-3d7a88260fca to disappear
May 14 12:09:26.601: INFO: Pod downwardapi-volume-f0fd7152-d2e4-4137-8f44-3d7a88260fca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:09:26.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1836" for this suite.

• [SLOW TEST:12.990 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3866,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:09:26.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 12:09:27.863: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 12:09:29.889: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:09:31.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:09:33.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:09:36.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:09:37.892: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:09:39.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054967, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:09:43.695 - 12:09:45.891: INFO: (deployment status poll repeated 3 times with identical output: ReadyReplicas:0, UnavailableReplicas:1, Available=False "MinimumReplicasUnavailable", Progressing=True "ReplicaSetUpdated")
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 12:09:48.947: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:09:49.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5159" for this suite.
STEP: Destroying namespace "webhook-5159-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.572 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":226,"skipped":3890,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:09:49.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 12:09:50.894: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 12:09:52.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054991, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054991, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054990, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:09:55.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054991, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054991, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725054990, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 12:09:57.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:10:10.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5450" for this suite.
STEP: Destroying namespace "webhook-5450-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:21.160 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":227,"skipped":3901,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
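
The four STEPs of the timeout test above exercise the documented interaction between a webhook's `timeoutSeconds` and its `failurePolicy`: a call that outlasts the timeout fails the request under `Fail`, is ignored under `Ignore`, and an empty timeout defaults to 10s in v1. A minimal sketch of that decision logic (this is an illustration, not the apiserver's implementation):

```python
# Sketch of the admission outcomes the "should honor timeout" test exercises.
# timeout_s=None models an empty timeoutSeconds, which v1 defaults to 10s.
def admission_outcome(webhook_latency_s, timeout_s, failure_policy="Fail"):
    effective_timeout = 10 if timeout_s is None else timeout_s  # v1 default
    if webhook_latency_s <= effective_timeout:
        return "allowed"  # webhook answered in time
    # The webhook call timed out; failurePolicy decides the request's fate.
    return "allowed" if failure_policy == "Ignore" else "rejected"

# The four cases from the log, in order:
print(admission_outcome(5, 1, "Fail"))     # timeout (1s) < latency (5s): rejected
print(admission_outcome(5, 1, "Ignore"))   # timed out, but policy Ignore: allowed
print(admission_outcome(5, 30, "Fail"))    # timeout longer than latency: allowed
print(admission_outcome(5, None, "Fail"))  # empty timeout, defaulted to 10s: allowed
```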
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:10:10.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:10:10.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-401" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":228,"skipped":3920,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
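
The QOS-class check above verifies that a pod whose containers have matching resource requests and limits is classed `Guaranteed`. A simplified sketch of the documented classification rule (real kubelet logic also handles request-defaulting and hugepages):

```python
# Simplified Kubernetes QoS classification:
#   BestEffort - no container sets any requests or limits
#   Guaranteed - every container has cpu+memory limits, and requests == limits
#   Burstable  - everything else
def qos_class(containers):
    # containers: list of dicts like {"requests": {...}, "limits": {...}}
    requests = [c.get("requests") or {} for c in containers]
    limits = [c.get("limits") or {} for c in containers]
    if all(not r and not l for r, l in zip(requests, limits)):
        return "BestEffort"
    guaranteed = all(
        set(l) >= {"cpu", "memory"} and r == l for r, l in zip(requests, limits)
    )
    return "Guaranteed" if guaranteed else "Burstable"

res = {"cpu": "100m", "memory": "100Mi"}
print(qos_class([{"requests": res, "limits": res}]))  # Guaranteed, as in this test
print(qos_class([{}]))                                # BestEffort
print(qos_class([{"requests": {"cpu": "100m"}}]))     # Burstable
```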
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:10:10.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6443.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6443.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6443.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6443.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6443.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6443.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 14 12:10:24.278: INFO: DNS probes using dns-6443/dns-test-905526bc-cf93-4d93-a143-3d2a736fa880 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:10:24.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6443" for this suite.

• [SLOW TEST:13.905 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":229,"skipped":3930,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
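
The wheezy/jessie probe commands above build the pod's A record by replacing the dots in its IP with dashes (`hostname -i | awk -F. ...`). The same transformation, as a small Python sketch (the IP below is a made-up example):

```python
# Pod A record format: <dashed-ip>.<namespace>.pod.<cluster-domain>
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

print(pod_a_record("10.244.1.5", "dns-6443"))
# -> 10-244-1-5.dns-6443.pod.cluster.local
```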
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:10:24.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
May 14 12:10:24.928: INFO: Waiting up to 5m0s for pod "client-containers-f06634e5-5639-48e9-bd32-f0f33344a458" in namespace "containers-8843" to be "Succeeded or Failed"
May 14 12:10:25.087: INFO: Pod "client-containers-f06634e5-5639-48e9-bd32-f0f33344a458": Phase="Pending", Reason="", readiness=false. Elapsed: 158.943824ms
May 14 12:10:27.090: INFO: Pod "client-containers-f06634e5-5639-48e9-bd32-f0f33344a458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162568708s
May 14 12:10:29.094: INFO: Pod "client-containers-f06634e5-5639-48e9-bd32-f0f33344a458": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166585884s
May 14 12:10:31.123: INFO: Pod "client-containers-f06634e5-5639-48e9-bd32-f0f33344a458": Phase="Pending", Reason="", readiness=false. Elapsed: 6.195148548s
May 14 12:10:33.275: INFO: Pod "client-containers-f06634e5-5639-48e9-bd32-f0f33344a458": Phase="Running", Reason="", readiness=true. Elapsed: 8.34712429s
May 14 12:10:35.279: INFO: Pod "client-containers-f06634e5-5639-48e9-bd32-f0f33344a458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.351303557s
STEP: Saw pod success
May 14 12:10:35.279: INFO: Pod "client-containers-f06634e5-5639-48e9-bd32-f0f33344a458" satisfied condition "Succeeded or Failed"
May 14 12:10:35.282: INFO: Trying to get logs from node kali-worker2 pod client-containers-f06634e5-5639-48e9-bd32-f0f33344a458 container test-container: 
STEP: delete the pod
May 14 12:10:35.428: INFO: Waiting for pod client-containers-f06634e5-5639-48e9-bd32-f0f33344a458 to disappear
May 14 12:10:35.664: INFO: Pod client-containers-f06634e5-5639-48e9-bd32-f0f33344a458 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:10:35.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8843" for this suite.

• [SLOW TEST:11.944 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3966,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
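
The test above overrides the image's default arguments by setting pod `args` without `command`, so the image ENTRYPOINT runs with the pod's args and the image CMD is ignored. A sketch of the documented ENTRYPOINT/CMD vs command/args interaction (illustrative names, not the e2e framework's code):

```python
# Documented Kubernetes rules for the effective container command line:
#   neither set          -> ENTRYPOINT + CMD
#   command only         -> command (image CMD ignored)
#   args only            -> ENTRYPOINT + args (what this test exercises)
#   command and args     -> command + args
def effective_command(entrypoint, cmd, command=None, args=None):
    if command is None and args is None:
        return entrypoint + cmd
    if command is not None and args is None:
        return command
    if command is None and args is not None:
        return entrypoint + args
    return command + args

print(effective_command(["/ep"], ["default-arg"], args=["override"]))
# -> ['/ep', 'override']
```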
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:10:36.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 14 12:10:36.668: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 12:10:36.678: INFO: Number of nodes with available pods: 0
May 14 12:10:36.678: INFO: Node kali-worker is running more than one daemon pod
May 14 12:10:37.683: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 12:10:37.686: INFO: Number of nodes with available pods: 0
May 14 12:10:37.686: INFO: Node kali-worker is running more than one daemon pod
May 14 12:10:38.684: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 12:10:39.075: INFO: Number of nodes with available pods: 0
May 14 12:10:39.075: INFO: Node kali-worker is running more than one daemon pod
May 14 12:10:40.704: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 12:10:41.073: INFO: Number of nodes with available pods: 0
May 14 12:10:41.073: INFO: Node kali-worker is running more than one daemon pod
May 14 12:10:41.694: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 12:10:42.347: INFO: Number of nodes with available pods: 0
May 14 12:10:42.347: INFO: Node kali-worker is running more than one daemon pod
May 14 12:10:42.715: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 12:10:43.170: INFO: Number of nodes with available pods: 0
May 14 12:10:43.170: INFO: Node kali-worker is running more than one daemon pod
May 14 12:10:43.728: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 12:10:43.750: INFO: Number of nodes with available pods: 0
May 14 12:10:43.750: INFO: Node kali-worker is running more than one daemon pod
May 14 12:10:44.719: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 12:10:44.743: INFO: Number of nodes with available pods: 2
May 14 12:10:44.743: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 14 12:10:44.787: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 12:10:44.790: INFO: Number of nodes with available pods: 1
May 14 12:10:44.790: INFO: Node kali-worker2 is running more than one daemon pod
May 14 12:10:45.830: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 12:10:45.844: INFO: Number of nodes with available pods: 1
May 14 12:10:45.844: INFO: Node kali-worker2 is running more than one daemon pod
May 14 12:10:46 - 12:11:21: INFO: (the same three-line poll repeated roughly once per second while waiting for the deleted daemon pod to be revived: kali-control-plane skipped for its NoSchedule taint, 1 node with available pods)
May 14 12:11:22.795: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 12:11:22.799: INFO: Number of nodes with available pods: 1
May 14 12:11:22.799: INFO: Node kali-worker2 is running more than one daemon pod
May 14 12:11:23.818: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 14 12:11:23.851: INFO: Number of nodes with available pods: 2
May 14 12:11:23.851: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9492, will wait for the garbage collector to delete the pods
May 14 12:11:23.909: INFO: Deleting DaemonSet.extensions daemon-set took: 3.811626ms
May 14 12:11:24.209: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.338988ms
May 14 12:11:33.812: INFO: Number of nodes with available pods: 0
May 14 12:11:33.812: INFO: Number of running nodes: 0, number of available pods: 0
May 14 12:11:33.814: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9492/daemonsets","resourceVersion":"4289634"},"items":null}

May 14 12:11:33.815: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9492/pods","resourceVersion":"4289634"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:11:33.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9492" for this suite.

• [SLOW TEST:57.469 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":231,"skipped":3975,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
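The repeated "skip checking this node" messages above come from DaemonSet pods that carry no toleration for the control-plane taint, so only the worker nodes count toward availability. A minimal sketch of the kind of DaemonSet this test exercises (the name mirrors the log; labels and image are illustrative, not taken from the test source):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set              # mirrors the DaemonSet name in the log
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      # Without a toleration for node-role.kubernetes.io/master:NoSchedule,
      # the controller skips the tainted control-plane node, exactly as the
      # "can't tolerate node kali-control-plane" lines above report.
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2   # placeholder image
```

Once a pod is available on each schedulable node, the test reaches "Number of running nodes: 2, number of available pods: 2" and tears the DaemonSet down, waiting for the garbage collector to delete the pods.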
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:11:33.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 14 12:11:34.998: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 14 12:11:37.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725055095, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725055095, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725055095, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725055094, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 12:11:39.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725055095, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725055095, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725055095, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725055094, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 12:11:42.043: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 12:11:42.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:11:43.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6596" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:9.537 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":232,"skipped":3993,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
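The conversion test above creates custom resources at two versions and lists them at each version, relying on the webhook to convert between them. A sketch of the CRD wiring that enables this, assuming hypothetical group and kind names (only the service name `e2e-test-crd-conversion-webhook` appears in the log; the `caBundle` that would normally accompany `clientConfig` is elided):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.stable.example.com     # hypothetical CRD name
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
  - name: v1
    served: true
    storage: true                       # v1 is the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false                      # v2 objects are converted on the fly
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook                   # delegate version conversion to a webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          name: e2e-test-crd-conversion-webhook   # matches the service in the log
          namespace: crd-webhook-6596             # namespace taken from the log
          path: /convert                          # illustrative path
```

Listing the CRs "in v1" and "in v2" then returns the same objects, each converted by the webhook to the requested version; a mixed (non-homogeneous) list is what forces per-item conversion.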
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:11:43.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-09428c4e-6679-480d-b16b-2ec2db2cd34d in namespace container-probe-6069
May 14 12:11:49.530: INFO: Started pod busybox-09428c4e-6679-480d-b16b-2ec2db2cd34d in namespace container-probe-6069
STEP: checking the pod's current state and verifying that restartCount is present
May 14 12:11:49.533: INFO: Initial restart count of pod busybox-09428c4e-6679-480d-b16b-2ec2db2cd34d is 0
May 14 12:13:00.711: INFO: Restart count of pod container-probe-6069/busybox-09428c4e-6679-480d-b16b-2ec2db2cd34d is now 1 (1m11.178490873s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:13:00.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6069" for this suite.

• [SLOW TEST:77.441 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":4004,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
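The restart observed above (restartCount 0 to 1 after roughly 71s) is driven by an exec liveness probe that succeeds while `/tmp/health` exists and fails once the container removes it. A minimal sketch of such a pod, with illustrative timings (the actual test's sleep intervals are not shown in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec            # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    # Create the health file, then delete it so the probe starts failing
    # and the kubelet restarts the container.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # the probe the test name refers to
      initialDelaySeconds: 5
      periodSeconds: 5
```

After `failureThreshold` consecutive probe failures (3 by default), the kubelet kills and restarts the container, which is exactly the restartCount transition the test asserts on.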
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:13:00.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 14 12:13:00.856: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 14 12:13:00.916: INFO: Waiting for terminating namespaces to be deleted...
May 14 12:13:00.919: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
May 14 12:13:00.936: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 14 12:13:00.936: INFO: 	Container kube-proxy ready: true, restart count 0
May 14 12:13:00.936: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 14 12:13:00.936: INFO: 	Container kindnet-cni ready: true, restart count 1
May 14 12:13:00.936: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
May 14 12:13:00.954: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 14 12:13:00.954: INFO: 	Container kindnet-cni ready: true, restart count 0
May 14 12:13:00.954: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 14 12:13:00.954: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7f4652be-eaeb-4a46-a2cf-78706881dc4a 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-7f4652be-eaeb-4a46-a2cf-78706881dc4a off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7f4652be-eaeb-4a46-a2cf-78706881dc4a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:13:11.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-71" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:10.362 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":234,"skipped":4014,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
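The steps above first schedule an unlabeled pod to discover a usable node, apply a random label (key `kubernetes.io/e2e-7f4652be-eaeb-4a46-a2cf-78706881dc4a`, value `42`) to that node, then relaunch the pod with a matching `nodeSelector`. A sketch of the relaunched pod, with an illustrative name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels              # illustrative name
spec:
  # Only nodes carrying this exact label/value pair are eligible,
  # so the pod must land on kali-worker, where the test set the label.
  nodeSelector:
    kubernetes.io/e2e-7f4652be-eaeb-4a46-a2cf-78706881dc4a: "42"
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2  # placeholder image
```

The label itself would be applied with something like `kubectl label node kali-worker kubernetes.io/e2e-7f4652be-eaeb-4a46-a2cf-78706881dc4a=42`, and the test removes it again in the final STEP.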
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:13:11.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 12:13:11.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config version'
May 14 12:13:11.733: INFO: stderr: ""
May 14 12:13:11.733: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:20Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:13:11.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4032" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":235,"skipped":4024,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:13:11.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 14 12:13:12.005: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64c5d8e0-c6b0-44a6-ad32-5ad55bf717f2" in namespace "downward-api-5647" to be "Succeeded or Failed"
May 14 12:13:12.026: INFO: Pod "downwardapi-volume-64c5d8e0-c6b0-44a6-ad32-5ad55bf717f2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.104965ms
May 14 12:13:14.029: INFO: Pod "downwardapi-volume-64c5d8e0-c6b0-44a6-ad32-5ad55bf717f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024211192s
May 14 12:13:16.032: INFO: Pod "downwardapi-volume-64c5d8e0-c6b0-44a6-ad32-5ad55bf717f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027256951s
STEP: Saw pod success
May 14 12:13:16.032: INFO: Pod "downwardapi-volume-64c5d8e0-c6b0-44a6-ad32-5ad55bf717f2" satisfied condition "Succeeded or Failed"
May 14 12:13:16.035: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-64c5d8e0-c6b0-44a6-ad32-5ad55bf717f2 container client-container: 
STEP: delete the pod
May 14 12:13:16.148: INFO: Waiting for pod downwardapi-volume-64c5d8e0-c6b0-44a6-ad32-5ad55bf717f2 to disappear
May 14 12:13:16.160: INFO: Pod downwardapi-volume-64c5d8e0-c6b0-44a6-ad32-5ad55bf717f2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:13:16.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5647" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4071,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
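The downward API volume test above mounts the container's own CPU limit as a file and checks the pod's output. A sketch of the object under test, assuming an illustrative limit value (the container name `client-container` matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "250m"                  # illustrative limit; the volume surfaces whatever is set
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "cpu_limit"
        resourceFieldRef:            # exposes a resource limit, not a metadata field
          containerName: client-container
          resource: limits.cpu
```

With the default divisor, the CPU limit is written in whole cores (rounded up), so the pod prints the limit, exits, and reaches the "Succeeded or Failed" condition the test waits for.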
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:13:16.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 12:13:17.647: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 12:13:21.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725055197, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725055197, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725055197, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725055197, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 12:13:24.989: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:13:35.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5443" for this suite.
STEP: Destroying namespace "webhook-5443-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.051 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":237,"skipped":4076,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
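The webhook registered "via the AdmissionRegistration API" above intercepts pod and configmap writes and denies the non-compliant ones, while a labeled namespace bypasses it entirely. A sketch of such a registration, with illustrative names and label key (only the service name `e2e-test-webhook` and namespace `webhook-5443` come from the log; `caBundle` is elided):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-objects          # illustrative name
webhooks:
- name: deny-unwanted-objects.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                  # a hanging webhook therefore blocks the request
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]   # covers the PUT and PATCH steps above
    resources: ["pods", "configmaps"]
  # Namespaces carrying this label bypass the webhook, matching the
  # "whitelisted namespace" step in the log:
  namespaceSelector:
    matchExpressions:
    - key: skip-webhook                # illustrative label key
      operator: DoesNotExist
  clientConfig:
    service:
      name: e2e-test-webhook           # matches the service in the log
      namespace: webhook-5443
      path: /always-deny               # illustrative path
```

Both CREATE and UPDATE are in the rules, which is why updating an already-admitted configmap to a non-compliant one (by PUT or PATCH) is also rejected.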
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:13:35.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 12:13:35.305: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 14 12:13:39.046: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:13:41.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5869" for this suite.

• [SLOW TEST:6.255 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":238,"skipped":4112,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
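The quota created above caps the namespace at two pods, so an RC asking for more surfaces a ReplicaFailure condition until it is scaled down to fit. A sketch of the quota object, with the name taken from the log:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test     # mirrors the quota name in the log
spec:
  hard:
    pods: "2"              # at most two pods may exist in this namespace
```

An RC requesting, say, three replicas then has its extra pod creation rejected by the quota admission check; the controller records the failure as a status condition, and scaling `spec.replicas` down to 2 clears it, which is what the final "has no failure condition set" STEP verifies.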
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:13:41.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
May 14 12:13:59.011: INFO: 5 pods remaining
May 14 12:13:59.011: INFO: 5 pods has nil DeletionTimestamp
May 14 12:13:59.011: INFO: 
STEP: Gathering metrics
W0514 12:14:04.000221       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 12:14:04.000: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:14:04.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-804" for this suite.

• [SLOW TEST:23.563 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":239,"skipped":4145,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
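The "half of pods" step above gives some pods two owners, so foreground-deleting one RC must not collect dependents that still have a valid second owner. A sketch of what such a pod's metadata looks like, with an illustrative pod name and placeholder UIDs (real ownerReferences must carry the owners' actual UIDs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod               # illustrative name
  ownerReferences:                   # two owners, as in the test's "half of pods" step
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 11111111-2222-3333-4444-555555555555   # placeholder; must match the owner's UID
    blockOwnerDeletion: true
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 66666666-7777-8888-9999-000000000000   # placeholder
```

When `simpletest-rc-to-be-deleted` is deleted, the garbage collector removes only that entry from each dependent's ownerReferences; pods still referenced by `simpletest-rc-to-stay` survive, which is the invariant this test asserts.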
SSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:14:05.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9520
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-9520
I0514 12:14:07.830873       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9520, replica count: 2
I0514 12:14:10.881762       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0514 12:14:13.882018       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0514 12:14:16.882234       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 14 12:14:16.882: INFO: Creating new exec pod
May 14 12:14:25.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9520 execpodpvqwc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 14 12:14:25.459: INFO: stderr: "I0514 12:14:25.377967    2407 log.go:172] (0xc000bc80b0) (0xc0007ba000) Create stream\nI0514 12:14:25.378024    2407 log.go:172] (0xc000bc80b0) (0xc0007ba000) Stream added, broadcasting: 1\nI0514 12:14:25.380345    2407 log.go:172] (0xc000bc80b0) Reply frame received for 1\nI0514 12:14:25.380368    2407 log.go:172] (0xc000bc80b0) (0xc00001d5e0) Create stream\nI0514 12:14:25.380375    2407 log.go:172] (0xc000bc80b0) (0xc00001d5e0) Stream added, broadcasting: 3\nI0514 12:14:25.381023    2407 log.go:172] (0xc000bc80b0) Reply frame received for 3\nI0514 12:14:25.381049    2407 log.go:172] (0xc000bc80b0) (0xc0007ba0a0) Create stream\nI0514 12:14:25.381061    2407 log.go:172] (0xc000bc80b0) (0xc0007ba0a0) Stream added, broadcasting: 5\nI0514 12:14:25.382059    2407 log.go:172] (0xc000bc80b0) Reply frame received for 5\nI0514 12:14:25.451849    2407 log.go:172] (0xc000bc80b0) Data frame received for 5\nI0514 12:14:25.451881    2407 log.go:172] (0xc0007ba0a0) (5) Data frame handling\nI0514 12:14:25.451900    2407 log.go:172] (0xc0007ba0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0514 12:14:25.452218    2407 log.go:172] (0xc000bc80b0) Data frame received for 5\nI0514 12:14:25.452235    2407 log.go:172] (0xc0007ba0a0) (5) Data frame handling\nI0514 12:14:25.452245    2407 log.go:172] (0xc0007ba0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0514 12:14:25.452577    2407 log.go:172] (0xc000bc80b0) Data frame received for 3\nI0514 12:14:25.452619    2407 log.go:172] (0xc00001d5e0) (3) Data frame handling\nI0514 12:14:25.452641    2407 log.go:172] (0xc000bc80b0) Data frame received for 5\nI0514 12:14:25.452647    2407 log.go:172] (0xc0007ba0a0) (5) Data frame handling\nI0514 12:14:25.454676    2407 log.go:172] (0xc000bc80b0) Data frame received for 1\nI0514 12:14:25.454690    2407 log.go:172] (0xc0007ba000) (1) Data frame handling\nI0514 12:14:25.454737    2407 log.go:172] 
(0xc0007ba000) (1) Data frame sent\nI0514 12:14:25.454778    2407 log.go:172] (0xc000bc80b0) (0xc0007ba000) Stream removed, broadcasting: 1\nI0514 12:14:25.454828    2407 log.go:172] (0xc000bc80b0) Go away received\nI0514 12:14:25.455153    2407 log.go:172] (0xc000bc80b0) (0xc0007ba000) Stream removed, broadcasting: 1\nI0514 12:14:25.455166    2407 log.go:172] (0xc000bc80b0) (0xc00001d5e0) Stream removed, broadcasting: 3\nI0514 12:14:25.455176    2407 log.go:172] (0xc000bc80b0) (0xc0007ba0a0) Stream removed, broadcasting: 5\n"
May 14 12:14:25.459: INFO: stdout: ""
May 14 12:14:25.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9520 execpodpvqwc -- /bin/sh -x -c nc -zv -t -w 2 10.101.55.231 80'
May 14 12:14:25.725: INFO: stderr: "I0514 12:14:25.642818    2426 log.go:172] (0xc0000ea790) (0xc0007a2000) Create stream\nI0514 12:14:25.642876    2426 log.go:172] (0xc0000ea790) (0xc0007a2000) Stream added, broadcasting: 1\nI0514 12:14:25.649284    2426 log.go:172] (0xc0000ea790) Reply frame received for 1\nI0514 12:14:25.649318    2426 log.go:172] (0xc0000ea790) (0xc00079c140) Create stream\nI0514 12:14:25.649328    2426 log.go:172] (0xc0000ea790) (0xc00079c140) Stream added, broadcasting: 3\nI0514 12:14:25.650285    2426 log.go:172] (0xc0000ea790) Reply frame received for 3\nI0514 12:14:25.650326    2426 log.go:172] (0xc0000ea790) (0xc0007e0000) Create stream\nI0514 12:14:25.650341    2426 log.go:172] (0xc0000ea790) (0xc0007e0000) Stream added, broadcasting: 5\nI0514 12:14:25.651371    2426 log.go:172] (0xc0000ea790) Reply frame received for 5\nI0514 12:14:25.721410    2426 log.go:172] (0xc0000ea790) Data frame received for 3\nI0514 12:14:25.721436    2426 log.go:172] (0xc00079c140) (3) Data frame handling\nI0514 12:14:25.721458    2426 log.go:172] (0xc0000ea790) Data frame received for 5\nI0514 12:14:25.721467    2426 log.go:172] (0xc0007e0000) (5) Data frame handling\nI0514 12:14:25.721472    2426 log.go:172] (0xc0007e0000) (5) Data frame sent\nI0514 12:14:25.721476    2426 log.go:172] (0xc0000ea790) Data frame received for 5\nI0514 12:14:25.721480    2426 log.go:172] (0xc0007e0000) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.55.231 80\nConnection to 10.101.55.231 80 port [tcp/http] succeeded!\nI0514 12:14:25.722274    2426 log.go:172] (0xc0000ea790) Data frame received for 1\nI0514 12:14:25.722283    2426 log.go:172] (0xc0007a2000) (1) Data frame handling\nI0514 12:14:25.722289    2426 log.go:172] (0xc0007a2000) (1) Data frame sent\nI0514 12:14:25.722294    2426 log.go:172] (0xc0000ea790) (0xc0007a2000) Stream removed, broadcasting: 1\nI0514 12:14:25.722462    2426 log.go:172] (0xc0000ea790) (0xc0007a2000) Stream removed, broadcasting: 1\nI0514 
12:14:25.722472    2426 log.go:172] (0xc0000ea790) (0xc00079c140) Stream removed, broadcasting: 3\nI0514 12:14:25.722477    2426 log.go:172] (0xc0000ea790) (0xc0007e0000) Stream removed, broadcasting: 5\n"
May 14 12:14:25.725: INFO: stdout: ""
May 14 12:14:25.725: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:14:25.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9520" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:20.724 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":240,"skipped":4151,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
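Editor's note: the `nc -zv -t -w 2 <target> 80` probes in the exec output above are plain TCP connect checks with a 2-second timeout, run once against the service DNS name and once against the ClusterIP. A minimal standalone sketch of the same check (hypothetical helper, demonstrated against a throwaway local listener rather than a cluster Service):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection to host:port within `timeout` seconds,
    mirroring what `nc -zv -t -w 2 host port` reports."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Stand-in for the ClusterIP endpoint: a local listening socket.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    host, port = srv.getsockname()
    print(tcp_reachable(host, port))  # -> True
    srv.close()
```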
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:14:25.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-04d71c3a-d811-4ce4-b2a3-ceb8618b7c9a
STEP: Creating a pod to test consume configMaps
May 14 12:14:25.876: INFO: Waiting up to 5m0s for pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c" in namespace "configmap-9332" to be "Succeeded or Failed"
May 14 12:14:25.896: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.134761ms
May 14 12:14:27.950: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073760926s
May 14 12:14:30.504: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.628405458s
May 14 12:14:33.568: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.692412902s
May 14 12:14:36.327: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.451462732s
May 14 12:14:39.563: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.68741409s
May 14 12:14:41.658: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.781784401s
May 14 12:14:43.944: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.067952226s
May 14 12:14:46.959: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.082863973s
May 14 12:14:50.682: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.806177251s
May 14 12:14:52.716: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.840232595s
May 14 12:14:55.012: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.135566787s
May 14 12:14:57.598: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.721620358s
May 14 12:14:59.601: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.725314659s
May 14 12:15:01.777: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.900913616s
May 14 12:15:03.962: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 38.086420195s
May 14 12:15:05.965: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.088810493s
STEP: Saw pod success
May 14 12:15:05.965: INFO: Pod "pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c" satisfied condition "Succeeded or Failed"
May 14 12:15:05.967: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c container configmap-volume-test: 
STEP: delete the pod
May 14 12:15:06.101: INFO: Waiting for pod pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c to disappear
May 14 12:15:06.109: INFO: Pod pod-configmaps-542d9246-9b5b-410a-ab6d-aa47acb7ca7c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:15:06.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9332" for this suite.

• [SLOW TEST:40.353 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4158,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
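Editor's note: the long run of `Phase="Pending" ... Elapsed:` lines above is the framework's poll-until-terminal loop: check the pod phase, sleep, re-check, and stop once the pod reaches "Succeeded or Failed" or the 5m0s timeout expires. A standalone sketch of that pattern (names are illustrative, not the framework's actual API):

```python
import time

def wait_for(condition, timeout: float, interval: float = 0.01):
    """Poll `condition` every `interval` seconds until it returns a truthy
    value or `timeout` elapses. Returns that value, or None on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    return None

if __name__ == "__main__":
    # Stub "pod" that reports Pending a few times before succeeding.
    phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
    current = {"phase": "Pending"}

    def check():
        current["phase"] = next(phases, current["phase"])
        return current["phase"] if current["phase"] in ("Succeeded", "Failed") else None

    print(wait_for(check, timeout=1.0))  # -> Succeeded
```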
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:15:06.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:15:19.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4372" for this suite.

• [SLOW TEST:13.620 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":242,"skipped":4159,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
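Editor's note: the quota STEPs above (admit a pod that fits, reject pods that exceed the remainder, release usage when the pod is deleted) reduce to used-vs-hard bookkeeping per resource. A toy sketch of that accounting (illustrative only, not the apiserver's implementation):

```python
class ResourceQuota:
    """Track used vs. hard limits; admit a pod only if it fits the remainder."""

    def __init__(self, hard):
        self.hard = dict(hard)
        self.used = {k: 0 for k in hard}

    def admit(self, requests) -> bool:
        # Reject if any requested resource would exceed its hard limit.
        if any(self.used.get(k, 0) + v > self.hard.get(k, 0) for k, v in requests.items()):
            return False
        for k, v in requests.items():
            self.used[k] = self.used.get(k, 0) + v
        return True

    def release(self, requests):
        # Deleting the pod releases its usage back to the quota.
        for k, v in requests.items():
            self.used[k] -= v

if __name__ == "__main__":
    quota = ResourceQuota({"pods": 1, "cpu": 1000})  # cpu in millicores
    print(quota.admit({"pods": 1, "cpu": 500}))      # -> True  (fits quota)
    print(quota.admit({"pods": 1, "cpu": 100}))      # -> False (pod count exhausted)
    quota.release({"pods": 1, "cpu": 500})
    print(quota.admit({"pods": 1, "cpu": 600}))      # -> True  (usage released)
```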
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:15:19.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:15:36.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1473" for this suite.

• [SLOW TEST:17.132 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":243,"skipped":4189,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:15:36.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 12:15:37.262: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4d7222fd-2135-429f-acc5-54b76d6cafdf", Controller:(*bool)(0xc003f64442), BlockOwnerDeletion:(*bool)(0xc003f64443)}}
May 14 12:15:37.301: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"be9db7da-207c-4078-ad8d-8a4a6b1f2171", Controller:(*bool)(0xc00512fe9a), BlockOwnerDeletion:(*bool)(0xc00512fe9b)}}
May 14 12:15:37.326: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"58674230-4452-4b18-8b8d-61e43ab7a31e", Controller:(*bool)(0xc003e69322), BlockOwnerDeletion:(*bool)(0xc003e69323)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:15:42.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6016" for this suite.

• [SLOW TEST:6.185 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":244,"skipped":4197,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
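Editor's note: in the OwnerReferences logged above, pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2 — an ownership circle that the garbage collector must not block on. Detecting such a circle is a plain graph walk along owner pointers; an illustrative sketch (hypothetical helper, operating on names rather than real API objects):

```python
def find_cycle(owners):
    """owners maps object name -> its owner's name (or None for no owner).
    Return the members of the first ownership cycle found, or None."""
    for start in owners:
        seen = []
        node = start
        # Follow owner pointers until we run off the graph or revisit a node.
        while node is not None and node not in seen:
            seen.append(node)
            node = owners.get(node)
        if node is not None:
            return seen[seen.index(node):]  # the cycle members, in walk order
    return None

if __name__ == "__main__":
    # Matches the log: pod1 <- pod3, pod2 <- pod1, pod3 <- pod2.
    print(find_cycle({"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}))
    # -> ['pod1', 'pod3', 'pod2']
```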
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:15:43.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-e127a94e-9766-4216-a14b-4e21abef613e
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-e127a94e-9766-4216-a14b-4e21abef613e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:16:07.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7323" for this suite.

• [SLOW TEST:24.245 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4228,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:16:07.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 14 12:17:17.478: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 12:17:17.478: INFO: >>> kubeConfig: /root/.kube/config
I0514 12:17:17.515116       7 log.go:172] (0xc001e582c0) (0xc001181c20) Create stream
I0514 12:17:17.515151       7 log.go:172] (0xc001e582c0) (0xc001181c20) Stream added, broadcasting: 1
I0514 12:17:17.516704       7 log.go:172] (0xc001e582c0) Reply frame received for 1
I0514 12:17:17.516732       7 log.go:172] (0xc001e582c0) (0xc0010d6140) Create stream
I0514 12:17:17.516739       7 log.go:172] (0xc001e582c0) (0xc0010d6140) Stream added, broadcasting: 3
I0514 12:17:17.517555       7 log.go:172] (0xc001e582c0) Reply frame received for 3
I0514 12:17:17.517591       7 log.go:172] (0xc001e582c0) (0xc001181cc0) Create stream
I0514 12:17:17.517605       7 log.go:172] (0xc001e582c0) (0xc001181cc0) Stream added, broadcasting: 5
I0514 12:17:17.518392       7 log.go:172] (0xc001e582c0) Reply frame received for 5
I0514 12:17:17.560552       7 log.go:172] (0xc001e582c0) Data frame received for 5
I0514 12:17:17.560576       7 log.go:172] (0xc001181cc0) (5) Data frame handling
I0514 12:17:17.560596       7 log.go:172] (0xc001e582c0) Data frame received for 3
I0514 12:17:17.560606       7 log.go:172] (0xc0010d6140) (3) Data frame handling
I0514 12:17:17.560620       7 log.go:172] (0xc0010d6140) (3) Data frame sent
I0514 12:17:17.560631       7 log.go:172] (0xc001e582c0) Data frame received for 3
I0514 12:17:17.560641       7 log.go:172] (0xc0010d6140) (3) Data frame handling
I0514 12:17:17.561785       7 log.go:172] (0xc001e582c0) Data frame received for 1
I0514 12:17:17.561798       7 log.go:172] (0xc001181c20) (1) Data frame handling
I0514 12:17:17.561805       7 log.go:172] (0xc001181c20) (1) Data frame sent
I0514 12:17:17.561813       7 log.go:172] (0xc001e582c0) (0xc001181c20) Stream removed, broadcasting: 1
I0514 12:17:17.561876       7 log.go:172] (0xc001e582c0) Go away received
I0514 12:17:17.561901       7 log.go:172] (0xc001e582c0) (0xc001181c20) Stream removed, broadcasting: 1
I0514 12:17:17.561927       7 log.go:172] (0xc001e582c0) (0xc0010d6140) Stream removed, broadcasting: 3
I0514 12:17:17.561939       7 log.go:172] (0xc001e582c0) (0xc001181cc0) Stream removed, broadcasting: 5
May 14 12:17:17.561: INFO: Exec stderr: ""
May 14 12:17:17.561: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 12:17:17.561: INFO: >>> kubeConfig: /root/.kube/config
I0514 12:17:17.587393       7 log.go:172] (0xc001e242c0) (0xc001e89400) Create stream
I0514 12:17:17.587410       7 log.go:172] (0xc001e242c0) (0xc001e89400) Stream added, broadcasting: 1
I0514 12:17:17.588619       7 log.go:172] (0xc001e242c0) Reply frame received for 1
I0514 12:17:17.588655       7 log.go:172] (0xc001e242c0) (0xc0023e8b40) Create stream
I0514 12:17:17.588667       7 log.go:172] (0xc001e242c0) (0xc0023e8b40) Stream added, broadcasting: 3
I0514 12:17:17.589648       7 log.go:172] (0xc001e242c0) Reply frame received for 3
I0514 12:17:17.589677       7 log.go:172] (0xc001e242c0) (0xc0023e8fa0) Create stream
I0514 12:17:17.589688       7 log.go:172] (0xc001e242c0) (0xc0023e8fa0) Stream added, broadcasting: 5
I0514 12:17:17.590351       7 log.go:172] (0xc001e242c0) Reply frame received for 5
I0514 12:17:17.643862       7 log.go:172] (0xc001e242c0) Data frame received for 5
I0514 12:17:17.643885       7 log.go:172] (0xc0023e8fa0) (5) Data frame handling
I0514 12:17:17.643913       7 log.go:172] (0xc001e242c0) Data frame received for 3
I0514 12:17:17.643955       7 log.go:172] (0xc0023e8b40) (3) Data frame handling
I0514 12:17:17.643977       7 log.go:172] (0xc0023e8b40) (3) Data frame sent
I0514 12:17:17.643996       7 log.go:172] (0xc001e242c0) Data frame received for 3
I0514 12:17:17.644013       7 log.go:172] (0xc0023e8b40) (3) Data frame handling
I0514 12:17:17.645355       7 log.go:172] (0xc001e242c0) Data frame received for 1
I0514 12:17:17.645591       7 log.go:172] (0xc001e89400) (1) Data frame handling
I0514 12:17:17.645671       7 log.go:172] (0xc001e89400) (1) Data frame sent
I0514 12:17:17.645713       7 log.go:172] (0xc001e242c0) (0xc001e89400) Stream removed, broadcasting: 1
I0514 12:17:17.645828       7 log.go:172] (0xc001e242c0) (0xc001e89400) Stream removed, broadcasting: 1
I0514 12:17:17.645873       7 log.go:172] (0xc001e242c0) (0xc0023e8b40) Stream removed, broadcasting: 3
I0514 12:17:17.645903       7 log.go:172] (0xc001e242c0) (0xc0023e8fa0) Stream removed, broadcasting: 5
May 14 12:17:17.645: INFO: Exec stderr: ""
May 14 12:17:17.645: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
I0514 12:17:17.645972       7 log.go:172] (0xc001e242c0) Go away received
May 14 12:17:17.645: INFO: >>> kubeConfig: /root/.kube/config
I0514 12:17:17.674872       7 log.go:172] (0xc0020d7c30) (0xc0023e9220) Create stream
I0514 12:17:17.674903       7 log.go:172] (0xc0020d7c30) (0xc0023e9220) Stream added, broadcasting: 1
I0514 12:17:17.676517       7 log.go:172] (0xc0020d7c30) Reply frame received for 1
I0514 12:17:17.676548       7 log.go:172] (0xc0020d7c30) (0xc0010d66e0) Create stream
I0514 12:17:17.676560       7 log.go:172] (0xc0020d7c30) (0xc0010d66e0) Stream added, broadcasting: 3
I0514 12:17:17.677606       7 log.go:172] (0xc0020d7c30) Reply frame received for 3
I0514 12:17:17.677655       7 log.go:172] (0xc0020d7c30) (0xc001181e00) Create stream
I0514 12:17:17.677668       7 log.go:172] (0xc0020d7c30) (0xc001181e00) Stream added, broadcasting: 5
I0514 12:17:17.678363       7 log.go:172] (0xc0020d7c30) Reply frame received for 5
I0514 12:17:17.724818       7 log.go:172] (0xc0020d7c30) Data frame received for 3
I0514 12:17:17.724838       7 log.go:172] (0xc0010d66e0) (3) Data frame handling
I0514 12:17:17.724849       7 log.go:172] (0xc0010d66e0) (3) Data frame sent
I0514 12:17:17.724926       7 log.go:172] (0xc0020d7c30) Data frame received for 5
I0514 12:17:17.724940       7 log.go:172] (0xc001181e00) (5) Data frame handling
I0514 12:17:17.724957       7 log.go:172] (0xc0020d7c30) Data frame received for 3
I0514 12:17:17.724966       7 log.go:172] (0xc0010d66e0) (3) Data frame handling
I0514 12:17:17.726002       7 log.go:172] (0xc0020d7c30) Data frame received for 1
I0514 12:17:17.726021       7 log.go:172] (0xc0023e9220) (1) Data frame handling
I0514 12:17:17.726034       7 log.go:172] (0xc0023e9220) (1) Data frame sent
I0514 12:17:17.726044       7 log.go:172] (0xc0020d7c30) (0xc0023e9220) Stream removed, broadcasting: 1
I0514 12:17:17.726091       7 log.go:172] (0xc0020d7c30) (0xc0023e9220) Stream removed, broadcasting: 1
I0514 12:17:17.726103       7 log.go:172] (0xc0020d7c30) (0xc0010d66e0) Stream removed, broadcasting: 3
I0514 12:17:17.726115       7 log.go:172] (0xc0020d7c30) (0xc001181e00) Stream removed, broadcasting: 5
May 14 12:17:17.726: INFO: Exec stderr: ""
May 14 12:17:17.726: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 12:17:17.726: INFO: >>> kubeConfig: /root/.kube/config
I0514 12:17:17.726275       7 log.go:172] (0xc0020d7c30) Go away received
I0514 12:17:17.755337       7 log.go:172] (0xc001c982c0) (0xc0023e94a0) Create stream
I0514 12:17:17.755377       7 log.go:172] (0xc001c982c0) (0xc0023e94a0) Stream added, broadcasting: 1
I0514 12:17:17.756945       7 log.go:172] (0xc001c982c0) Reply frame received for 1
I0514 12:17:17.756986       7 log.go:172] (0xc001c982c0) (0xc001e894a0) Create stream
I0514 12:17:17.757001       7 log.go:172] (0xc001c982c0) (0xc001e894a0) Stream added, broadcasting: 3
I0514 12:17:17.758121       7 log.go:172] (0xc001c982c0) Reply frame received for 3
I0514 12:17:17.758152       7 log.go:172] (0xc001c982c0) (0xc0023e9540) Create stream
I0514 12:17:17.758165       7 log.go:172] (0xc001c982c0) (0xc0023e9540) Stream added, broadcasting: 5
I0514 12:17:17.759026       7 log.go:172] (0xc001c982c0) Reply frame received for 5
I0514 12:17:17.807231       7 log.go:172] (0xc001c982c0) Data frame received for 5
I0514 12:17:17.807253       7 log.go:172] (0xc0023e9540) (5) Data frame handling
I0514 12:17:17.807275       7 log.go:172] (0xc001c982c0) Data frame received for 3
I0514 12:17:17.807302       7 log.go:172] (0xc001e894a0) (3) Data frame handling
I0514 12:17:17.807327       7 log.go:172] (0xc001e894a0) (3) Data frame sent
I0514 12:17:17.807345       7 log.go:172] (0xc001c982c0) Data frame received for 3
I0514 12:17:17.807402       7 log.go:172] (0xc001e894a0) (3) Data frame handling
I0514 12:17:17.808562       7 log.go:172] (0xc001c982c0) Data frame received for 1
I0514 12:17:17.808584       7 log.go:172] (0xc0023e94a0) (1) Data frame handling
I0514 12:17:17.808594       7 log.go:172] (0xc0023e94a0) (1) Data frame sent
I0514 12:17:17.808607       7 log.go:172] (0xc001c982c0) (0xc0023e94a0) Stream removed, broadcasting: 1
I0514 12:17:17.808623       7 log.go:172] (0xc001c982c0) Go away received
I0514 12:17:17.808864       7 log.go:172] (0xc001c982c0) (0xc0023e94a0) Stream removed, broadcasting: 1
I0514 12:17:17.808877       7 log.go:172] (0xc001c982c0) (0xc001e894a0) Stream removed, broadcasting: 3
I0514 12:17:17.808885       7 log.go:172] (0xc001c982c0) (0xc0023e9540) Stream removed, broadcasting: 5
May 14 12:17:17.808: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May 14 12:17:17.808: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 12:17:17.808: INFO: >>> kubeConfig: /root/.kube/config
I0514 12:17:17.835149       7 log.go:172] (0xc001c988f0) (0xc0023e97c0) Create stream
I0514 12:17:17.835176       7 log.go:172] (0xc001c988f0) (0xc0023e97c0) Stream added, broadcasting: 1
I0514 12:17:17.836699       7 log.go:172] (0xc001c988f0) Reply frame received for 1
I0514 12:17:17.836728       7 log.go:172] (0xc001c988f0) (0xc0010d6a00) Create stream
I0514 12:17:17.836738       7 log.go:172] (0xc001c988f0) (0xc0010d6a00) Stream added, broadcasting: 3
I0514 12:17:17.837477       7 log.go:172] (0xc001c988f0) Reply frame received for 3
I0514 12:17:17.837500       7 log.go:172] (0xc001c988f0) (0xc0010d6d20) Create stream
I0514 12:17:17.837509       7 log.go:172] (0xc001c988f0) (0xc0010d6d20) Stream added, broadcasting: 5
I0514 12:17:17.838172       7 log.go:172] (0xc001c988f0) Reply frame received for 5
I0514 12:17:17.894577       7 log.go:172] (0xc001c988f0) Data frame received for 5
I0514 12:17:17.894603       7 log.go:172] (0xc0010d6d20) (5) Data frame handling
I0514 12:17:17.894624       7 log.go:172] (0xc001c988f0) Data frame received for 3
I0514 12:17:17.894652       7 log.go:172] (0xc0010d6a00) (3) Data frame handling
I0514 12:17:17.894692       7 log.go:172] (0xc0010d6a00) (3) Data frame sent
I0514 12:17:17.894710       7 log.go:172] (0xc001c988f0) Data frame received for 3
I0514 12:17:17.894724       7 log.go:172] (0xc0010d6a00) (3) Data frame handling
I0514 12:17:17.895686       7 log.go:172] (0xc001c988f0) Data frame received for 1
I0514 12:17:17.895699       7 log.go:172] (0xc0023e97c0) (1) Data frame handling
I0514 12:17:17.895708       7 log.go:172] (0xc0023e97c0) (1) Data frame sent
I0514 12:17:17.895718       7 log.go:172] (0xc001c988f0) (0xc0023e97c0) Stream removed, broadcasting: 1
I0514 12:17:17.895838       7 log.go:172] (0xc001c988f0) (0xc0023e97c0) Stream removed, broadcasting: 1
I0514 12:17:17.895853       7 log.go:172] (0xc001c988f0) (0xc0010d6a00) Stream removed, broadcasting: 3
I0514 12:17:17.895963       7 log.go:172] (0xc001c988f0) (0xc0010d6d20) Stream removed, broadcasting: 5
May 14 12:17:17.896: INFO: Exec stderr: ""
May 14 12:17:17.896: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 12:17:17.896: INFO: >>> kubeConfig: /root/.kube/config
I0514 12:17:17.898030       7 log.go:172] (0xc001c988f0) Go away received
I0514 12:17:17.922394       7 log.go:172] (0xc001e24630) (0xc001e895e0) Create stream
I0514 12:17:17.922416       7 log.go:172] (0xc001e24630) (0xc001e895e0) Stream added, broadcasting: 1
I0514 12:17:17.924339       7 log.go:172] (0xc001e24630) Reply frame received for 1
I0514 12:17:17.924372       7 log.go:172] (0xc001e24630) (0xc0010d7040) Create stream
I0514 12:17:17.924383       7 log.go:172] (0xc001e24630) (0xc0010d7040) Stream added, broadcasting: 3
I0514 12:17:17.925069       7 log.go:172] (0xc001e24630) Reply frame received for 3
I0514 12:17:17.925083       7 log.go:172] (0xc001e24630) (0xc001e89680) Create stream
I0514 12:17:17.925087       7 log.go:172] (0xc001e24630) (0xc001e89680) Stream added, broadcasting: 5
I0514 12:17:17.925728       7 log.go:172] (0xc001e24630) Reply frame received for 5
I0514 12:17:17.976769       7 log.go:172] (0xc001e24630) Data frame received for 5
I0514 12:17:17.976793       7 log.go:172] (0xc001e89680) (5) Data frame handling
I0514 12:17:17.976824       7 log.go:172] (0xc001e24630) Data frame received for 3
I0514 12:17:17.976843       7 log.go:172] (0xc0010d7040) (3) Data frame handling
I0514 12:17:17.976856       7 log.go:172] (0xc0010d7040) (3) Data frame sent
I0514 12:17:17.976864       7 log.go:172] (0xc001e24630) Data frame received for 3
I0514 12:17:17.976873       7 log.go:172] (0xc0010d7040) (3) Data frame handling
I0514 12:17:17.977555       7 log.go:172] (0xc001e24630) Data frame received for 1
I0514 12:17:17.977569       7 log.go:172] (0xc001e895e0) (1) Data frame handling
I0514 12:17:17.977580       7 log.go:172] (0xc001e895e0) (1) Data frame sent
I0514 12:17:17.977596       7 log.go:172] (0xc001e24630) (0xc001e895e0) Stream removed, broadcasting: 1
I0514 12:17:17.977616       7 log.go:172] (0xc001e24630) Go away received
I0514 12:17:17.977697       7 log.go:172] (0xc001e24630) (0xc001e895e0) Stream removed, broadcasting: 1
I0514 12:17:17.977709       7 log.go:172] (0xc001e24630) (0xc0010d7040) Stream removed, broadcasting: 3
I0514 12:17:17.977716       7 log.go:172] (0xc001e24630) (0xc001e89680) Stream removed, broadcasting: 5
May 14 12:17:17.977: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 14 12:17:17.977: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 12:17:17.977: INFO: >>> kubeConfig: /root/.kube/config
I0514 12:17:18.001716       7 log.go:172] (0xc00206e370) (0xc0014e8460) Create stream
I0514 12:17:18.001731       7 log.go:172] (0xc00206e370) (0xc0014e8460) Stream added, broadcasting: 1
I0514 12:17:18.002987       7 log.go:172] (0xc00206e370) Reply frame received for 1
I0514 12:17:18.003018       7 log.go:172] (0xc00206e370) (0xc001e89900) Create stream
I0514 12:17:18.003030       7 log.go:172] (0xc00206e370) (0xc001e89900) Stream added, broadcasting: 3
I0514 12:17:18.003674       7 log.go:172] (0xc00206e370) Reply frame received for 3
I0514 12:17:18.003699       7 log.go:172] (0xc00206e370) (0xc001e899a0) Create stream
I0514 12:17:18.003709       7 log.go:172] (0xc00206e370) (0xc001e899a0) Stream added, broadcasting: 5
I0514 12:17:18.004399       7 log.go:172] (0xc00206e370) Reply frame received for 5
I0514 12:17:18.067209       7 log.go:172] (0xc00206e370) Data frame received for 5
I0514 12:17:18.067229       7 log.go:172] (0xc001e899a0) (5) Data frame handling
I0514 12:17:18.067257       7 log.go:172] (0xc00206e370) Data frame received for 3
I0514 12:17:18.067305       7 log.go:172] (0xc001e89900) (3) Data frame handling
I0514 12:17:18.067323       7 log.go:172] (0xc001e89900) (3) Data frame sent
I0514 12:17:18.067332       7 log.go:172] (0xc00206e370) Data frame received for 3
I0514 12:17:18.067339       7 log.go:172] (0xc001e89900) (3) Data frame handling
I0514 12:17:18.068305       7 log.go:172] (0xc00206e370) Data frame received for 1
I0514 12:17:18.068313       7 log.go:172] (0xc0014e8460) (1) Data frame handling
I0514 12:17:18.068330       7 log.go:172] (0xc0014e8460) (1) Data frame sent
I0514 12:17:18.068545       7 log.go:172] (0xc00206e370) (0xc0014e8460) Stream removed, broadcasting: 1
I0514 12:17:18.068572       7 log.go:172] (0xc00206e370) Go away received
I0514 12:17:18.068615       7 log.go:172] (0xc00206e370) (0xc0014e8460) Stream removed, broadcasting: 1
I0514 12:17:18.068625       7 log.go:172] (0xc00206e370) (0xc001e89900) Stream removed, broadcasting: 3
I0514 12:17:18.068633       7 log.go:172] (0xc00206e370) (0xc001e899a0) Stream removed, broadcasting: 5
May 14 12:17:18.068: INFO: Exec stderr: ""
May 14 12:17:18.068: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 12:17:18.068: INFO: >>> kubeConfig: /root/.kube/config
I0514 12:17:18.095060       7 log.go:172] (0xc001e249a0) (0xc001e89b80) Create stream
I0514 12:17:18.095095       7 log.go:172] (0xc001e249a0) (0xc001e89b80) Stream added, broadcasting: 1
I0514 12:17:18.096473       7 log.go:172] (0xc001e249a0) Reply frame received for 1
I0514 12:17:18.096524       7 log.go:172] (0xc001e249a0) (0xc001e89c20) Create stream
I0514 12:17:18.096545       7 log.go:172] (0xc001e249a0) (0xc001e89c20) Stream added, broadcasting: 3
I0514 12:17:18.097546       7 log.go:172] (0xc001e249a0) Reply frame received for 3
I0514 12:17:18.097567       7 log.go:172] (0xc001e249a0) (0xc0014e85a0) Create stream
I0514 12:17:18.097575       7 log.go:172] (0xc001e249a0) (0xc0014e85a0) Stream added, broadcasting: 5
I0514 12:17:18.098165       7 log.go:172] (0xc001e249a0) Reply frame received for 5
I0514 12:17:18.161762       7 log.go:172] (0xc001e249a0) Data frame received for 5
I0514 12:17:18.161777       7 log.go:172] (0xc0014e85a0) (5) Data frame handling
I0514 12:17:18.161793       7 log.go:172] (0xc001e249a0) Data frame received for 3
I0514 12:17:18.161797       7 log.go:172] (0xc001e89c20) (3) Data frame handling
I0514 12:17:18.161802       7 log.go:172] (0xc001e89c20) (3) Data frame sent
I0514 12:17:18.161807       7 log.go:172] (0xc001e249a0) Data frame received for 3
I0514 12:17:18.161813       7 log.go:172] (0xc001e89c20) (3) Data frame handling
I0514 12:17:18.162936       7 log.go:172] (0xc001e249a0) Data frame received for 1
I0514 12:17:18.162948       7 log.go:172] (0xc001e89b80) (1) Data frame handling
I0514 12:17:18.162955       7 log.go:172] (0xc001e89b80) (1) Data frame sent
I0514 12:17:18.162963       7 log.go:172] (0xc001e249a0) (0xc001e89b80) Stream removed, broadcasting: 1
I0514 12:17:18.163008       7 log.go:172] (0xc001e249a0) Go away received
I0514 12:17:18.163025       7 log.go:172] (0xc001e249a0) (0xc001e89b80) Stream removed, broadcasting: 1
I0514 12:17:18.163033       7 log.go:172] (0xc001e249a0) (0xc001e89c20) Stream removed, broadcasting: 3
I0514 12:17:18.163038       7 log.go:172] (0xc001e249a0) (0xc0014e85a0) Stream removed, broadcasting: 5
May 14 12:17:18.163: INFO: Exec stderr: ""
May 14 12:17:18.163: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 12:17:18.163: INFO: >>> kubeConfig: /root/.kube/config
I0514 12:17:18.185777       7 log.go:172] (0xc001e24fd0) (0xc001a82140) Create stream
I0514 12:17:18.185805       7 log.go:172] (0xc001e24fd0) (0xc001a82140) Stream added, broadcasting: 1
I0514 12:17:18.187481       7 log.go:172] (0xc001e24fd0) Reply frame received for 1
I0514 12:17:18.187505       7 log.go:172] (0xc001e24fd0) (0xc001a821e0) Create stream
I0514 12:17:18.187514       7 log.go:172] (0xc001e24fd0) (0xc001a821e0) Stream added, broadcasting: 3
I0514 12:17:18.188281       7 log.go:172] (0xc001e24fd0) Reply frame received for 3
I0514 12:17:18.188303       7 log.go:172] (0xc001e24fd0) (0xc0023e9860) Create stream
I0514 12:17:18.188311       7 log.go:172] (0xc001e24fd0) (0xc0023e9860) Stream added, broadcasting: 5
I0514 12:17:18.189038       7 log.go:172] (0xc001e24fd0) Reply frame received for 5
I0514 12:17:18.251039       7 log.go:172] (0xc001e24fd0) Data frame received for 3
I0514 12:17:18.251056       7 log.go:172] (0xc001a821e0) (3) Data frame handling
I0514 12:17:18.251068       7 log.go:172] (0xc001a821e0) (3) Data frame sent
I0514 12:17:18.251074       7 log.go:172] (0xc001e24fd0) Data frame received for 3
I0514 12:17:18.251077       7 log.go:172] (0xc001a821e0) (3) Data frame handling
I0514 12:17:18.251099       7 log.go:172] (0xc001e24fd0) Data frame received for 5
I0514 12:17:18.251106       7 log.go:172] (0xc0023e9860) (5) Data frame handling
I0514 12:17:18.251891       7 log.go:172] (0xc001e24fd0) Data frame received for 1
I0514 12:17:18.251926       7 log.go:172] (0xc001a82140) (1) Data frame handling
I0514 12:17:18.251952       7 log.go:172] (0xc001a82140) (1) Data frame sent
I0514 12:17:18.252018       7 log.go:172] (0xc001e24fd0) (0xc001a82140) Stream removed, broadcasting: 1
I0514 12:17:18.252092       7 log.go:172] (0xc001e24fd0) (0xc001a82140) Stream removed, broadcasting: 1
I0514 12:17:18.252103       7 log.go:172] (0xc001e24fd0) (0xc001a821e0) Stream removed, broadcasting: 3
I0514 12:17:18.252203       7 log.go:172] (0xc001e24fd0) Go away received
I0514 12:17:18.252241       7 log.go:172] (0xc001e24fd0) (0xc0023e9860) Stream removed, broadcasting: 5
May 14 12:17:18.252: INFO: Exec stderr: ""
May 14 12:17:18.252: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7576 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 12:17:18.252: INFO: >>> kubeConfig: /root/.kube/config
I0514 12:17:18.280224       7 log.go:172] (0xc001c98f20) (0xc0023e9a40) Create stream
I0514 12:17:18.280247       7 log.go:172] (0xc001c98f20) (0xc0023e9a40) Stream added, broadcasting: 1
I0514 12:17:18.282652       7 log.go:172] (0xc001c98f20) Reply frame received for 1
I0514 12:17:18.282698       7 log.go:172] (0xc001c98f20) (0xc0023e9ae0) Create stream
I0514 12:17:18.282717       7 log.go:172] (0xc001c98f20) (0xc0023e9ae0) Stream added, broadcasting: 3
I0514 12:17:18.283676       7 log.go:172] (0xc001c98f20) Reply frame received for 3
I0514 12:17:18.283713       7 log.go:172] (0xc001c98f20) (0xc0023e9c20) Create stream
I0514 12:17:18.283728       7 log.go:172] (0xc001c98f20) (0xc0023e9c20) Stream added, broadcasting: 5
I0514 12:17:18.284601       7 log.go:172] (0xc001c98f20) Reply frame received for 5
I0514 12:17:18.335007       7 log.go:172] (0xc001c98f20) Data frame received for 5
I0514 12:17:18.335042       7 log.go:172] (0xc0023e9c20) (5) Data frame handling
I0514 12:17:18.335067       7 log.go:172] (0xc001c98f20) Data frame received for 3
I0514 12:17:18.335086       7 log.go:172] (0xc0023e9ae0) (3) Data frame handling
I0514 12:17:18.335101       7 log.go:172] (0xc0023e9ae0) (3) Data frame sent
I0514 12:17:18.335130       7 log.go:172] (0xc001c98f20) Data frame received for 3
I0514 12:17:18.335149       7 log.go:172] (0xc0023e9ae0) (3) Data frame handling
I0514 12:17:18.336321       7 log.go:172] (0xc001c98f20) Data frame received for 1
I0514 12:17:18.336346       7 log.go:172] (0xc0023e9a40) (1) Data frame handling
I0514 12:17:18.336360       7 log.go:172] (0xc0023e9a40) (1) Data frame sent
I0514 12:17:18.336376       7 log.go:172] (0xc001c98f20) (0xc0023e9a40) Stream removed, broadcasting: 1
I0514 12:17:18.336418       7 log.go:172] (0xc001c98f20) Go away received
I0514 12:17:18.336459       7 log.go:172] (0xc001c98f20) (0xc0023e9a40) Stream removed, broadcasting: 1
I0514 12:17:18.336474       7 log.go:172] (0xc001c98f20) (0xc0023e9ae0) Stream removed, broadcasting: 3
I0514 12:17:18.336487       7 log.go:172] (0xc001c98f20) (0xc0023e9c20) Stream removed, broadcasting: 5
May 14 12:17:18.336: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:17:18.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7576" for this suite.

• [SLOW TEST:71.045 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4236,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
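The KubeletManagedEtcHosts test above execs `cat /etc/hosts` in containers where the kubelet is expected to leave the file alone: the STEP lines call out two opt-out cases, a container that specifies its own `/etc/hosts` mount, and a pod running with `hostNetwork: true`. A minimal sketch of pods exercising those two cases (names, image, and volume type are illustrative, not the exact specs the e2e framework builds):

```yaml
# Hypothetical sketch: a container that mounts its own /etc/hosts,
# so the kubelet does not inject its managed hosts file.
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  containers:
  - name: busybox-3
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hosts-volume
      mountPath: /etc/hosts   # container-specified mount => not kubelet-managed
  volumes:
  - name: hosts-volume
    emptyDir: {}
---
# With hostNetwork: true the kubelet likewise leaves /etc/hosts untouched,
# since the pod shares the node's network namespace and hosts file semantics.
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-hostnet-demo
spec:
  hostNetwork: true
  containers:
  - name: busybox-1
    image: busybox:1.29
    command: ["sleep", "3600"]
```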
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:17:18.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 14 12:17:18.438: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15170785-b127-43e8-a9aa-f087f6b98b96" in namespace "downward-api-5127" to be "Succeeded or Failed"
May 14 12:17:18.442: INFO: Pod "downwardapi-volume-15170785-b127-43e8-a9aa-f087f6b98b96": Phase="Pending", Reason="", readiness=false. Elapsed: 3.052147ms
May 14 12:17:20.447: INFO: Pod "downwardapi-volume-15170785-b127-43e8-a9aa-f087f6b98b96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00854532s
May 14 12:17:22.451: INFO: Pod "downwardapi-volume-15170785-b127-43e8-a9aa-f087f6b98b96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012256119s
May 14 12:17:24.616: INFO: Pod "downwardapi-volume-15170785-b127-43e8-a9aa-f087f6b98b96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177360785s
May 14 12:17:26.619: INFO: Pod "downwardapi-volume-15170785-b127-43e8-a9aa-f087f6b98b96": Phase="Running", Reason="", readiness=true. Elapsed: 8.180834658s
May 14 12:17:28.622: INFO: Pod "downwardapi-volume-15170785-b127-43e8-a9aa-f087f6b98b96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.183944642s
STEP: Saw pod success
May 14 12:17:28.623: INFO: Pod "downwardapi-volume-15170785-b127-43e8-a9aa-f087f6b98b96" satisfied condition "Succeeded or Failed"
May 14 12:17:28.624: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-15170785-b127-43e8-a9aa-f087f6b98b96 container client-container: 
STEP: delete the pod
May 14 12:17:28.688: INFO: Waiting for pod downwardapi-volume-15170785-b127-43e8-a9aa-f087f6b98b96 to disappear
May 14 12:17:28.723: INFO: Pod downwardapi-volume-15170785-b127-43e8-a9aa-f087f6b98b96 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:17:28.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5127" for this suite.

• [SLOW TEST:10.385 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4268,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
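The Downward API volume test above ("Creating a pod to test downward API volume plugin", then reading logs from `client-container`) works by projecting the container's own cpu request into a file via `resourceFieldRef`. A sketch of the kind of pod it creates (names, image, and the 250m request are assumptions for illustration):

```yaml
# Hypothetical sketch: a downwardAPI volume exposing the container's
# cpu request as a file the container can cat in its logs.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m   # file then contains the request in millicores, e.g. "250"
```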
SS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:17:28.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-8e28b210-3216-4e10-9f83-558f52604317
STEP: Creating a pod to test consume secrets
May 14 12:17:29.019: INFO: Waiting up to 5m0s for pod "pod-secrets-c51517c3-3759-4721-9054-3767c27b70fb" in namespace "secrets-1950" to be "Succeeded or Failed"
May 14 12:17:29.643: INFO: Pod "pod-secrets-c51517c3-3759-4721-9054-3767c27b70fb": Phase="Pending", Reason="", readiness=false. Elapsed: 624.1998ms
May 14 12:17:31.646: INFO: Pod "pod-secrets-c51517c3-3759-4721-9054-3767c27b70fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627254034s
May 14 12:17:34.981: INFO: Pod "pod-secrets-c51517c3-3759-4721-9054-3767c27b70fb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.962377225s
May 14 12:17:36.984: INFO: Pod "pod-secrets-c51517c3-3759-4721-9054-3767c27b70fb": Phase="Running", Reason="", readiness=true. Elapsed: 7.964894967s
May 14 12:17:38.988: INFO: Pod "pod-secrets-c51517c3-3759-4721-9054-3767c27b70fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.968767574s
STEP: Saw pod success
May 14 12:17:38.988: INFO: Pod "pod-secrets-c51517c3-3759-4721-9054-3767c27b70fb" satisfied condition "Succeeded or Failed"
May 14 12:17:38.990: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-c51517c3-3759-4721-9054-3767c27b70fb container secret-volume-test: 
STEP: delete the pod
May 14 12:17:39.048: INFO: Waiting for pod pod-secrets-c51517c3-3759-4721-9054-3767c27b70fb to disappear
May 14 12:17:39.052: INFO: Pod pod-secrets-c51517c3-3759-4721-9054-3767c27b70fb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:17:39.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1950" for this suite.

• [SLOW TEST:10.351 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4270,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
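The Secrets test above ("consumable in multiple volumes in a pod") mounts one Secret at two separate mount points inside the same container. A sketch under assumed names and data (the real test generates its Secret name and contents):

```yaml
# Hypothetical sketch: the same Secret consumed via two volumes in one pod.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    # Reads the same key through both mount points.
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-demo
  - name: secret-volume-2
    secret:
      secretName: secret-test-demo
```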
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:17:39.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 14 12:17:39.246: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-447 /api/v1/namespaces/watch-447/configmaps/e2e-watch-test-configmap-a 80cac6fd-a16f-4a6b-b886-83b1da5a2a2b 4291328 0 2020-05-14 12:17:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-14 12:17:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 14 12:17:39.246: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-447 /api/v1/namespaces/watch-447/configmaps/e2e-watch-test-configmap-a 80cac6fd-a16f-4a6b-b886-83b1da5a2a2b 4291328 0 2020-05-14 12:17:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-14 12:17:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 14 12:17:49.253: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-447 /api/v1/namespaces/watch-447/configmaps/e2e-watch-test-configmap-a 80cac6fd-a16f-4a6b-b886-83b1da5a2a2b 4291367 0 2020-05-14 12:17:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-14 12:17:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May 14 12:17:49.254: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-447 /api/v1/namespaces/watch-447/configmaps/e2e-watch-test-configmap-a 80cac6fd-a16f-4a6b-b886-83b1da5a2a2b 4291367 0 2020-05-14 12:17:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-14 12:17:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May 14 12:17:59.355: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-447 /api/v1/namespaces/watch-447/configmaps/e2e-watch-test-configmap-a 80cac6fd-a16f-4a6b-b886-83b1da5a2a2b 4291397 0 2020-05-14 12:17:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-14 12:17:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 14 12:17:59.356: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-447 /api/v1/namespaces/watch-447/configmaps/e2e-watch-test-configmap-a 80cac6fd-a16f-4a6b-b886-83b1da5a2a2b 4291397 0 2020-05-14 12:17:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-14 12:17:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May 14 12:18:12.375: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-447 /api/v1/namespaces/watch-447/configmaps/e2e-watch-test-configmap-a 80cac6fd-a16f-4a6b-b886-83b1da5a2a2b 4291418 0 2020-05-14 12:17:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-14 12:17:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 14 12:18:12.375: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-447 /api/v1/namespaces/watch-447/configmaps/e2e-watch-test-configmap-a 80cac6fd-a16f-4a6b-b886-83b1da5a2a2b 4291418 0 2020-05-14 12:17:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-14 12:17:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May 14 12:18:23.010: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-447 /api/v1/namespaces/watch-447/configmaps/e2e-watch-test-configmap-b 80062c4a-dd8a-410b-8237-99ede178f87f 4291435 0 2020-05-14 12:18:22 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-14 12:18:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 14 12:18:23.010: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-447 /api/v1/namespaces/watch-447/configmaps/e2e-watch-test-configmap-b 80062c4a-dd8a-410b-8237-99ede178f87f 4291435 0 2020-05-14 12:18:22 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-14 12:18:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May 14 12:18:33.126: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-447 /api/v1/namespaces/watch-447/configmaps/e2e-watch-test-configmap-b 80062c4a-dd8a-410b-8237-99ede178f87f 4291462 0 2020-05-14 12:18:22 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-14 12:18:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 14 12:18:33.126: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-447 /api/v1/namespaces/watch-447/configmaps/e2e-watch-test-configmap-b 80062c4a-dd8a-410b-8237-99ede178f87f 4291462 0 2020-05-14 12:18:22 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-14 12:18:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:18:43.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-447" for this suite.

• [SLOW TEST:64.108 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":249,"skipped":4312,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
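The `FieldsV1{Raw:*[...]}` arrays in the watch events above are the managed-fields patch printed as decimal UTF-8 byte values. A minimal sketch decoding one (the byte list is copied verbatim from the ADDED event above; Python used only for illustration):

```python
import json

# Decimal bytes copied from the FieldsV1 Raw field in the ADDED event above.
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123,
       34, 102, 58, 108, 97, 98, 101, 108, 115, 34, 58, 123, 34, 46, 34, 58,
       123, 125, 44, 34, 102, 58, 119, 97, 116, 99, 104, 45, 116, 104, 105,
       115, 45, 99, 111, 110, 102, 105, 103, 109, 97, 112, 34, 58, 123, 125,
       125, 125, 125]

# The bytes are just a UTF-8 encoded JSON document describing which
# fields the e2e.test field manager owns on the ConfigMap.
decoded = json.loads(bytes(raw).decode("utf-8"))
print(decoded)
```

Decoding shows the manager owns only the `watch-this-configmap` label, which matches the label selector the watchers filter on.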
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:18:43.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-d5a78e54-4297-44be-b6a5-ae9cfac5b762
STEP: Creating a pod to test consume configMaps
May 14 12:18:45.009: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c" in namespace "projected-235" to be "Succeeded or Failed"
May 14 12:18:47.005: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.995411121s
May 14 12:18:50.231: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.221459949s
May 14 12:18:52.575: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.565645244s
May 14 12:18:54.579: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.569734083s
May 14 12:18:56.583: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.574059653s
May 14 12:19:01.623: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.613341819s
May 14 12:19:03.673: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.663977681s
May 14 12:19:05.982: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.972460892s
May 14 12:19:08.191: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.181988495s
May 14 12:19:10.195: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.185245154s
May 14 12:19:12.434: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.424277068s
May 14 12:19:14.733: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.723545811s
May 14 12:19:17.013: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.003462081s
May 14 12:19:19.016: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.007093293s
May 14 12:19:27.624: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 42.614216777s
May 14 12:19:30.067: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 45.057376066s
May 14 12:19:32.372: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 47.362451547s
May 14 12:19:34.376: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 49.367070727s
May 14 12:19:36.521: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.511835059s
May 14 12:19:38.859: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 53.849226239s
May 14 12:19:42.135: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 57.125637539s
May 14 12:19:46.006: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.996466305s
May 14 12:19:48.010: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Running", Reason="", readiness=true. Elapsed: 1m3.000184193s
May 14 12:19:50.013: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m5.003127498s
STEP: Saw pod success
May 14 12:19:50.013: INFO: Pod "pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c" satisfied condition "Succeeded or Failed"
May 14 12:19:50.015: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c container projected-configmap-volume-test: 
STEP: delete the pod
May 14 12:19:50.326: INFO: Waiting for pod pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c to disappear
May 14 12:19:50.338: INFO: Pod pod-projected-configmaps-30c64c00-2cae-4b5f-be07-ba72823b232c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:19:50.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-235" for this suite.

• [SLOW TEST:67.154 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4316,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
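The pod above took over a minute to leave Pending; the `Elapsed:` values in the polling lines are Go `time.Duration` strings. A small hypothetical parser for them (the function name and approach are illustrative, not part of the e2e framework):

```python
import re

def parse_go_duration(s):
    """Convert a Go duration string like '1m5.003127498s' to seconds."""
    units = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 1e-3, "us": 1e-6, "ns": 1e-9}
    total = 0.0
    # Multi-letter units must be tried before single-letter ones so that
    # 'ms' is not read as minutes followed by a dangling 's'.
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(h|ms|us|ns|m|s)", s):
        total += float(value) * units[unit]
    return total

print(parse_go_duration("1m5.003127498s"))  # the final Succeeded elapsed time
print(parse_go_duration("66.527269ms"))
```

This makes it easy to diff consecutive poll intervals when triaging why a pod sat in Pending.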
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:19:50.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-bc6dd4da-2ed0-44bd-931a-9696331104ea
STEP: Creating a pod to test consume configMaps
May 14 12:19:50.574: INFO: Waiting up to 5m0s for pod "pod-configmaps-891bb8b5-915d-4154-bf38-ca4c82e56409" in namespace "configmap-1061" to be "Succeeded or Failed"
May 14 12:19:50.596: INFO: Pod "pod-configmaps-891bb8b5-915d-4154-bf38-ca4c82e56409": Phase="Pending", Reason="", readiness=false. Elapsed: 22.143528ms
May 14 12:19:52.599: INFO: Pod "pod-configmaps-891bb8b5-915d-4154-bf38-ca4c82e56409": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02536101s
May 14 12:19:54.602: INFO: Pod "pod-configmaps-891bb8b5-915d-4154-bf38-ca4c82e56409": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028354837s
May 14 12:19:56.778: INFO: Pod "pod-configmaps-891bb8b5-915d-4154-bf38-ca4c82e56409": Phase="Running", Reason="", readiness=true. Elapsed: 6.204822953s
May 14 12:19:58.782: INFO: Pod "pod-configmaps-891bb8b5-915d-4154-bf38-ca4c82e56409": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.208029875s
STEP: Saw pod success
May 14 12:19:58.782: INFO: Pod "pod-configmaps-891bb8b5-915d-4154-bf38-ca4c82e56409" satisfied condition "Succeeded or Failed"
May 14 12:19:58.784: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-891bb8b5-915d-4154-bf38-ca4c82e56409 container configmap-volume-test: 
STEP: delete the pod
May 14 12:19:58.812: INFO: Waiting for pod pod-configmaps-891bb8b5-915d-4154-bf38-ca4c82e56409 to disappear
May 14 12:19:58.817: INFO: Pod pod-configmaps-891bb8b5-915d-4154-bf38-ca4c82e56409 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:19:58.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1061" for this suite.

• [SLOW TEST:8.480 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4321,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
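"Item mode set" refers to the per-item `mode` field on the ConfigMap volume source: each projected key can carry its own file permission bits. The API serializes the mode as a plain decimal integer, so the octal values written in manifests look unfamiliar in JSON. A sketch of the conversion (the specific modes below are illustrative, not read from this test run):

```python
def to_api_mode(octal_string):
    """Convert an octal permission string from a manifest ('0400')
    to the decimal integer the Kubernetes API stores for an item's mode."""
    return int(octal_string, 8)

print(to_api_mode("0400"))  # read-only for owner
print(to_api_mode("0644"))  # the commonly documented defaultMode value
```

Knowing this mapping helps when a `kubectl get -o json` shows `"mode": 256` and you need to recognize it as `0400`.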
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:19:58.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
May 14 12:19:58.964: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
May 14 12:19:58.979: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
May 14 12:19:58.979: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
May 14 12:19:59.039: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
May 14 12:19:59.039: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
May 14 12:19:59.081: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
May 14 12:19:59.081: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
May 14 12:20:06.522: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:20:06.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-5415" for this suite.

• [SLOW TEST:7.758 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":252,"skipped":4335,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
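The `Verifying requests`/`Verifying limits` lines print `resource.Quantity` internals: `{{100 -3} {} 100m DecimalSI}` means 100 × 10⁻³ (100m CPU), while the BinarySI entries are raw byte counts. A quick sanity check of the values logged above:

```python
from fractions import Fraction

Ki, Mi, Gi = 1024, 1024**2, 1024**3

# {{100 -3}} is value 100 at scale 10**-3, i.e. 100m == 0.1 CPU.
assert Fraction(100, 10**3) == Fraction(1, 10)

# Byte values taken from the Verifying lines above.
assert 209715200 == 200 * Mi        # default memory request: 200Mi
assert 214748364800 == 200 * Gi     # default ephemeral-storage request: 200Gi
assert 524288000 == 500 * Mi        # memory limit: 500Mi
assert 536870912000 == 500 * Gi     # ephemeral-storage limit: 500Gi
print("all quantities check out")
```

The partial-requirements case then shows the merge rule: explicitly set fields (300m CPU, 150Mi/150Gi requests) win, and the LimitRange defaults fill in the rest.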
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:20:06.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-b02c42cf-3b80-4b89-b903-be1032c7e627
STEP: Creating a pod to test consume configMaps
May 14 12:20:06.813: INFO: Waiting up to 5m0s for pod "pod-configmaps-7277fb31-1e62-4567-9a49-28be90b1724c" in namespace "configmap-4302" to be "Succeeded or Failed"
May 14 12:20:06.817: INFO: Pod "pod-configmaps-7277fb31-1e62-4567-9a49-28be90b1724c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.954159ms
May 14 12:20:09.156: INFO: Pod "pod-configmaps-7277fb31-1e62-4567-9a49-28be90b1724c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342630762s
May 14 12:20:11.222: INFO: Pod "pod-configmaps-7277fb31-1e62-4567-9a49-28be90b1724c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.40834724s
May 14 12:20:13.270: INFO: Pod "pod-configmaps-7277fb31-1e62-4567-9a49-28be90b1724c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.456659355s
May 14 12:20:15.710: INFO: Pod "pod-configmaps-7277fb31-1e62-4567-9a49-28be90b1724c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.896219576s
May 14 12:20:17.713: INFO: Pod "pod-configmaps-7277fb31-1e62-4567-9a49-28be90b1724c": Phase="Running", Reason="", readiness=true. Elapsed: 10.899591174s
May 14 12:20:19.851: INFO: Pod "pod-configmaps-7277fb31-1e62-4567-9a49-28be90b1724c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.037780868s
STEP: Saw pod success
May 14 12:20:19.851: INFO: Pod "pod-configmaps-7277fb31-1e62-4567-9a49-28be90b1724c" satisfied condition "Succeeded or Failed"
May 14 12:20:19.853: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-7277fb31-1e62-4567-9a49-28be90b1724c container configmap-volume-test: 
STEP: delete the pod
May 14 12:20:21.776: INFO: Waiting for pod pod-configmaps-7277fb31-1e62-4567-9a49-28be90b1724c to disappear
May 14 12:20:21.880: INFO: Pod pod-configmaps-7277fb31-1e62-4567-9a49-28be90b1724c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:20:21.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4302" for this suite.

• [SLOW TEST:15.379 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4335,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:20:21.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1167
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May 14 12:20:25.784: INFO: Found 0 stateful pods, waiting for 3
May 14 12:20:35.788: INFO: Found 2 stateful pods, waiting for 3
May 14 12:20:45.789: INFO: Found 2 stateful pods, waiting for 3
May 14 12:20:55.929: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:20:55.929: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:20:55.929: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 14 12:21:05.788: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:21:05.788: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:21:05.788: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 14 12:21:15.789: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:21:15.789: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:21:15.789: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May 14 12:21:15.815: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 14 12:21:25.875: INFO: Updating stateful set ss2
May 14 12:21:25.965: INFO: Waiting for Pod statefulset-1167/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
May 14 12:21:39.798: INFO: Found 2 stateful pods, waiting for 3
May 14 12:21:50.067: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:21:50.067: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:21:50.067: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 14 12:21:59.802: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:21:59.802: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:21:59.802: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 14 12:21:59.823: INFO: Updating stateful set ss2
May 14 12:22:00.432: INFO: Waiting for Pod statefulset-1167/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 14 12:22:10.440: INFO: Waiting for Pod statefulset-1167/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 14 12:22:20.456: INFO: Updating stateful set ss2
May 14 12:22:21.612: INFO: Waiting for StatefulSet statefulset-1167/ss2 to complete update
May 14 12:22:21.612: INFO: Waiting for Pod statefulset-1167/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 14 12:22:32.421: INFO: Waiting for StatefulSet statefulset-1167/ss2 to complete update
May 14 12:22:32.421: INFO: Waiting for Pod statefulset-1167/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 14 12:22:44.716: INFO: Waiting for StatefulSet statefulset-1167/ss2 to complete update
May 14 12:22:44.716: INFO: Waiting for Pod statefulset-1167/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 14 12:22:54.680: INFO: Waiting for StatefulSet statefulset-1167/ss2 to complete update
May 14 12:23:02.866: INFO: Waiting for StatefulSet statefulset-1167/ss2 to complete update
May 14 12:23:11.993: INFO: Waiting for StatefulSet statefulset-1167/ss2 to complete update
May 14 12:23:22.112: INFO: Waiting for StatefulSet statefulset-1167/ss2 to complete update
May 14 12:23:31.620: INFO: Waiting for StatefulSet statefulset-1167/ss2 to complete update
May 14 12:23:41.752: INFO: Waiting for StatefulSet statefulset-1167/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 14 12:23:51.620: INFO: Deleting all statefulset in ns statefulset-1167
May 14 12:23:51.622: INFO: Scaling statefulset ss2 to 0
May 14 12:24:11.660: INFO: Waiting for statefulset status.replicas updated to 0
May 14 12:24:11.662: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:24:11.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1167" for this suite.

• [SLOW TEST:229.790 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":254,"skipped":4379,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
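The canary and phased phases above are driven by the `partition` field of the StatefulSet's RollingUpdate strategy: only pods whose ordinal is at or above the partition are moved to the updated revision. A minimal model of that rule (the function name is illustrative):

```python
def ordinals_on_new_revision(replicas, partition):
    """Model of StatefulSet RollingUpdate partitioning: pods with
    ordinal >= partition are moved to the updated revision;
    lower ordinals stay on the current revision."""
    return [i for i in range(replicas) if i >= partition]

# partition > max ordinal: the update is gated off entirely
print(ordinals_on_new_revision(3, 3))
# canary: only ss2-2 receives the new image
print(ordinals_on_new_revision(3, 2))
# phased rollout complete: every pod updated
print(ordinals_on_new_revision(3, 0))
```

This is why the log first waits only for `ss2-2` to reach the new revision, and why deleting a below-partition pod restores it at the old revision rather than the new one.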
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:24:11.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
May 14 12:24:11.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5850'
May 14 12:24:30.143: INFO: stderr: ""
May 14 12:24:30.143: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 14 12:24:30.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:24:31.317: INFO: stderr: ""
May 14 12:24:31.317: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:24:31.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:24:31.717: INFO: stderr: ""
May 14 12:24:31.717: INFO: stdout: ""
May 14 12:24:31.717: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:24:36.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:24:38.369: INFO: stderr: ""
May 14 12:24:38.369: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:24:38.369: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:24:39.597: INFO: stderr: ""
May 14 12:24:39.598: INFO: stdout: ""
May 14 12:24:39.598: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:24:44.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:24:44.866: INFO: stderr: ""
May 14 12:24:44.866: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:24:44.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:24:45.081: INFO: stderr: ""
May 14 12:24:45.081: INFO: stdout: ""
May 14 12:24:45.081: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:24:50.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:24:51.696: INFO: stderr: ""
May 14 12:24:51.696: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:24:51.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:24:52.240: INFO: stderr: ""
May 14 12:24:52.240: INFO: stdout: ""
May 14 12:24:52.240: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:24:57.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:24:57.738: INFO: stderr: ""
May 14 12:24:57.738: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:24:57.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:24:58.727: INFO: stderr: ""
May 14 12:24:58.727: INFO: stdout: ""
May 14 12:24:58.727: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:25:03.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:25:04.271: INFO: stderr: ""
May 14 12:25:04.272: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:25:04.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:25:04.364: INFO: stderr: ""
May 14 12:25:04.364: INFO: stdout: ""
May 14 12:25:04.364: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:25:09.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:25:10.711: INFO: stderr: ""
May 14 12:25:10.711: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:25:10.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:25:12.158: INFO: stderr: ""
May 14 12:25:12.158: INFO: stdout: ""
May 14 12:25:12.158: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:25:17.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:25:17.498: INFO: stderr: ""
May 14 12:25:17.498: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:25:17.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:25:19.339: INFO: stderr: ""
May 14 12:25:19.339: INFO: stdout: ""
May 14 12:25:19.339: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:25:24.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:25:25.868: INFO: stderr: ""
May 14 12:25:25.868: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:25:25.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:25:27.217: INFO: stderr: ""
May 14 12:25:27.217: INFO: stdout: ""
May 14 12:25:27.217: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:25:32.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:25:32.869: INFO: stderr: ""
May 14 12:25:32.869: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:25:32.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:25:33.098: INFO: stderr: ""
May 14 12:25:33.098: INFO: stdout: ""
May 14 12:25:33.098: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:25:38.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:25:38.451: INFO: stderr: ""
May 14 12:25:38.451: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:25:38.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:25:38.807: INFO: stderr: ""
May 14 12:25:38.807: INFO: stdout: ""
May 14 12:25:38.807: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:25:43.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:25:44.164: INFO: stderr: ""
May 14 12:25:44.164: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:25:44.164: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:25:44.494: INFO: stderr: ""
May 14 12:25:44.494: INFO: stdout: ""
May 14 12:25:44.494: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:25:49.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:25:51.163: INFO: stderr: ""
May 14 12:25:51.163: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:25:51.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:25:51.533: INFO: stderr: ""
May 14 12:25:51.533: INFO: stdout: ""
May 14 12:25:51.533: INFO: update-demo-nautilus-47svs is created but not running
May 14 12:25:56.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:25:56.659: INFO: stderr: ""
May 14 12:25:56.659: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
May 14 12:25:56.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:25:56.749: INFO: stderr: ""
May 14 12:25:56.749: INFO: stdout: "true"
May 14 12:25:56.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:25:56.834: INFO: stderr: ""
May 14 12:25:56.834: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 12:25:56.834: INFO: validating pod update-demo-nautilus-47svs
May 14 12:25:56.860: INFO: got data: {
  "image": "nautilus.jpg"
}

May 14 12:25:56.860: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 12:25:56.860: INFO: update-demo-nautilus-47svs is verified up and running
May 14 12:25:56.860: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k6bv7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:25:56.947: INFO: stderr: ""
May 14 12:25:56.947: INFO: stdout: "true"
May 14 12:25:56.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k6bv7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:25:57.033: INFO: stderr: ""
May 14 12:25:57.033: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 12:25:57.033: INFO: validating pod update-demo-nautilus-k6bv7
May 14 12:25:57.036: INFO: got data: {
  "image": "nautilus.jpg"
}

May 14 12:25:57.036: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 12:25:57.036: INFO: update-demo-nautilus-k6bv7 is verified up and running
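The loop above polls each pod every five seconds, evaluating a go-template over `.status.containerStatuses` that prints `true` once the named container reports a `running` state. A minimal Python sketch of that poll-until-running pattern, with a hypothetical `get_status` callback standing in for the kubectl template query (names here are illustrative, not from the e2e framework):

```python
import time

def container_running(pod_status, container_name="update-demo"):
    # Stand-in for the go-template check: true iff the named container
    # reports a "running" state in .status.containerStatuses.
    for cs in pod_status.get("containerStatuses", []):
        if cs.get("name") == container_name and "running" in cs.get("state", {}):
            return True
    return False

def wait_until_running(get_status, pod_name, interval=5, timeout=300):
    # Poll every `interval` seconds, as the e2e loop above does,
    # until the container is running or the timeout expires.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if container_running(get_status(pod_name)):
            return True
        print(f"{pod_name} is created but not running")
        time.sleep(interval)
    return False
```

`get_status` is assumed to return the pod's `.status` as a dict (for example from `kubectl get pod -o json`); the real framework instead shells out to kubectl with the go-template shown in the log.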
STEP: scaling down the replication controller
May 14 12:25:57.038: INFO: scanned /root for discovery docs: 
May 14 12:25:57.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5850'
May 14 12:25:58.172: INFO: stderr: ""
May 14 12:25:58.172: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 14 12:25:58.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:25:58.258: INFO: stderr: ""
May 14 12:25:58.258: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-k6bv7 "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 14 12:26:03.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:26:03.350: INFO: stderr: ""
May 14 12:26:03.350: INFO: stdout: "update-demo-nautilus-47svs "
May 14 12:26:03.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:26:03.436: INFO: stderr: ""
May 14 12:26:03.436: INFO: stdout: "true"
May 14 12:26:03.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:26:03.531: INFO: stderr: ""
May 14 12:26:03.531: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 12:26:03.531: INFO: validating pod update-demo-nautilus-47svs
May 14 12:26:03.534: INFO: got data: {
  "image": "nautilus.jpg"
}

May 14 12:26:03.534: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 12:26:03.534: INFO: update-demo-nautilus-47svs is verified up and running
STEP: scaling up the replication controller
May 14 12:26:03.536: INFO: scanned /root for discovery docs: 
May 14 12:26:03.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5850'
May 14 12:26:04.643: INFO: stderr: ""
May 14 12:26:04.644: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 14 12:26:04.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:26:04.744: INFO: stderr: ""
May 14 12:26:04.744: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-l7xws "
May 14 12:26:04.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:26:04.832: INFO: stderr: ""
May 14 12:26:04.832: INFO: stdout: "true"
May 14 12:26:04.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:26:04.915: INFO: stderr: ""
May 14 12:26:04.915: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 12:26:04.915: INFO: validating pod update-demo-nautilus-47svs
May 14 12:26:04.917: INFO: got data: {
  "image": "nautilus.jpg"
}

May 14 12:26:04.917: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 12:26:04.917: INFO: update-demo-nautilus-47svs is verified up and running
May 14 12:26:04.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7xws -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:26:04.999: INFO: stderr: ""
May 14 12:26:04.999: INFO: stdout: ""
May 14 12:26:04.999: INFO: update-demo-nautilus-l7xws is created but not running
May 14 12:26:10.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5850'
May 14 12:26:10.095: INFO: stderr: ""
May 14 12:26:10.095: INFO: stdout: "update-demo-nautilus-47svs update-demo-nautilus-l7xws "
May 14 12:26:10.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:26:10.191: INFO: stderr: ""
May 14 12:26:10.191: INFO: stdout: "true"
May 14 12:26:10.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47svs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:26:10.300: INFO: stderr: ""
May 14 12:26:10.300: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 12:26:10.300: INFO: validating pod update-demo-nautilus-47svs
May 14 12:26:10.303: INFO: got data: {
  "image": "nautilus.jpg"
}

May 14 12:26:10.303: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 12:26:10.303: INFO: update-demo-nautilus-47svs is verified up and running
May 14 12:26:10.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7xws -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:26:10.403: INFO: stderr: ""
May 14 12:26:10.403: INFO: stdout: "true"
May 14 12:26:10.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7xws -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5850'
May 14 12:26:10.496: INFO: stderr: ""
May 14 12:26:10.496: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 12:26:10.496: INFO: validating pod update-demo-nautilus-l7xws
May 14 12:26:10.499: INFO: got data: {
  "image": "nautilus.jpg"
}

May 14 12:26:10.499: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 12:26:10.499: INFO: update-demo-nautilus-l7xws is verified up and running
STEP: using delete to clean up resources
May 14 12:26:10.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5850'
May 14 12:26:10.604: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 14 12:26:10.604: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 14 12:26:10.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5850'
May 14 12:26:10.714: INFO: stderr: "No resources found in kubectl-5850 namespace.\n"
May 14 12:26:10.714: INFO: stdout: ""
May 14 12:26:10.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5850 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 14 12:26:10.808: INFO: stderr: ""
May 14 12:26:10.808: INFO: stdout: "update-demo-nautilus-47svs\nupdate-demo-nautilus-l7xws\n"
May 14 12:26:11.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5850'
May 14 12:26:13.865: INFO: stderr: "No resources found in kubectl-5850 namespace.\n"
May 14 12:26:13.865: INFO: stdout: ""
May 14 12:26:13.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5850 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 14 12:26:14.058: INFO: stderr: ""
May 14 12:26:14.058: INFO: stdout: ""
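The cleanup above force-deletes the replication controller, then re-lists the labelled pods with a go-template that only emits names of pods lacking a `deletionTimestamp`, looping until that list is empty. A sketch of the same wait, under the assumption that `list_pods` returns the labelled pods as dicts (the helper names are illustrative):

```python
import time

def pods_awaiting_deletion(pods):
    # Mirrors the go-template above: names of pods that do NOT yet
    # carry a deletionTimestamp, i.e. deletion is not recorded.
    return [p["metadata"]["name"]
            for p in pods
            if not p["metadata"].get("deletionTimestamp")]

def wait_for_cleanup(list_pods, interval=0.5, timeout=30):
    # Re-list until every remaining pod is marked for deletion,
    # as the log does between 12:26:10 and 12:26:14.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not pods_awaiting_deletion(list_pods()):
            return True
        time.sleep(interval)
    return False
```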
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:26:14.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5850" for this suite.

• [SLOW TEST:122.313 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":255,"skipped":4419,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:26:14.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 14 12:26:27.700: INFO: Successfully updated pod "annotationupdate169ad102-76e2-420e-8c71-cf49196ebfea"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:26:31.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7567" for this suite.

• [SLOW TEST:17.807 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4430,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:26:31.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 14 12:26:34.911: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6" in namespace "downward-api-8656" to be "Succeeded or Failed"
May 14 12:26:35.811: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 900.900754ms
May 14 12:26:38.758: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.847008426s
May 14 12:26:40.885: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.974470996s
May 14 12:26:44.755: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.844229439s
May 14 12:26:47.869: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.958597132s
May 14 12:26:54.088: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.177864053s
May 14 12:26:56.102: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.191196402s
May 14 12:26:58.550: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.639196471s
May 14 12:27:00.553: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 25.641968839s
May 14 12:27:02.559: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 27.648724542s
May 14 12:27:06.189: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 31.278885097s
May 14 12:27:08.662: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.751410464s
May 14 12:27:10.962: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 36.051374736s
May 14 12:27:13.069: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 38.158454071s
May 14 12:27:16.429: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 41.518212636s
May 14 12:27:18.842: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 43.931303534s
May 14 12:27:20.845: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 45.934739746s
May 14 12:27:22.848: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 47.937722987s
May 14 12:27:25.244: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 50.333683923s
May 14 12:27:27.602: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 52.69166113s
May 14 12:27:30.544: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 55.633516259s
May 14 12:27:34.177: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 59.266260657s
May 14 12:27:36.180: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.269511876s
May 14 12:27:38.943: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.032606992s
May 14 12:27:41.812: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.901029614s
May 14 12:27:44.033: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.122830409s
May 14 12:27:46.621: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.710454255s
May 14 12:27:48.748: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.837347038s
May 14 12:27:50.854: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.943055522s
May 14 12:27:53.122: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.211779579s
May 14 12:27:55.698: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.787680075s
May 14 12:27:59.758: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.847487938s
May 14 12:28:01.762: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.850946835s
May 14 12:28:03.959: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m29.048730367s
May 14 12:28:06.422: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m31.511661562s
May 14 12:28:08.425: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m33.514745633s
May 14 12:28:10.638: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m35.727718576s
May 14 12:28:13.426: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.51539194s
May 14 12:28:16.572: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m41.66109062s
May 14 12:28:18.575: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m43.664904519s
May 14 12:28:20.598: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m45.687218002s
May 14 12:28:22.943: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.032488458s
May 14 12:28:25.075: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.164789371s
May 14 12:28:27.423: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.511992362s
May 14 12:28:29.650: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m54.739100489s
STEP: Saw pod success
May 14 12:28:29.650: INFO: Pod "downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6" satisfied condition "Succeeded or Failed"
May 14 12:28:29.712: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6 container client-container: 
STEP: delete the pod
May 14 12:28:30.022: INFO: Waiting for pod downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6 to disappear
May 14 12:28:30.025: INFO: Pod downwardapi-volume-2e51006a-7232-44e5-8b3f-d63bdfae3ba6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:28:30.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8656" for this suite.

• [SLOW TEST:118.159 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4440,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:28:30.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 14 12:28:30.095: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:28:30.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3912" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":258,"skipped":4440,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:28:30.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 14 12:28:30.803: INFO: Waiting up to 5m0s for pod "pod-40dbd0f8-1a8a-4341-80e6-73e877bcaeeb" in namespace "emptydir-9738" to be "Succeeded or Failed"
May 14 12:28:30.809: INFO: Pod "pod-40dbd0f8-1a8a-4341-80e6-73e877bcaeeb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.880988ms
May 14 12:28:32.812: INFO: Pod "pod-40dbd0f8-1a8a-4341-80e6-73e877bcaeeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009266264s
May 14 12:28:34.815: INFO: Pod "pod-40dbd0f8-1a8a-4341-80e6-73e877bcaeeb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012233932s
May 14 12:28:36.819: INFO: Pod "pod-40dbd0f8-1a8a-4341-80e6-73e877bcaeeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015635297s
STEP: Saw pod success
May 14 12:28:36.819: INFO: Pod "pod-40dbd0f8-1a8a-4341-80e6-73e877bcaeeb" satisfied condition "Succeeded or Failed"
May 14 12:28:36.822: INFO: Trying to get logs from node kali-worker2 pod pod-40dbd0f8-1a8a-4341-80e6-73e877bcaeeb container test-container: 
STEP: delete the pod
May 14 12:28:36.925: INFO: Waiting for pod pod-40dbd0f8-1a8a-4341-80e6-73e877bcaeeb to disappear
May 14 12:28:36.927: INFO: Pod pod-40dbd0f8-1a8a-4341-80e6-73e877bcaeeb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:28:36.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9738" for this suite.

• [SLOW TEST:6.248 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4454,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:28:36.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
May 14 12:28:37.173: INFO: >>> kubeConfig: /root/.kube/config
May 14 12:28:40.610: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:29:02.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4882" for this suite.

• [SLOW TEST:25.930 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":260,"skipped":4470,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:29:02.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-8209
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8209 to expose endpoints map[]
May 14 12:29:03.253: INFO: Get endpoints failed (6.929326ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May 14 12:29:04.257: INFO: successfully validated that service endpoint-test2 in namespace services-8209 exposes endpoints map[] (1.010766054s elapsed)
STEP: Creating pod pod1 in namespace services-8209
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8209 to expose endpoints map[pod1:[80]]
May 14 12:29:09.143: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.879579813s elapsed, will retry)
May 14 12:29:16.462: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (12.199272828s elapsed, will retry)
May 14 12:29:22.371: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (18.108006657s elapsed, will retry)
May 14 12:29:34.215: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (29.951580118s elapsed, will retry)
May 14 12:29:42.619: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (38.355797453s elapsed, will retry)
May 14 12:29:49.098: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (44.834630009s elapsed, will retry)
May 14 12:29:58.466: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (54.202734359s elapsed, will retry)
May 14 12:30:05.262: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (1m0.998535891s elapsed, will retry)
May 14 12:30:14.116: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (1m9.853400358s elapsed, will retry)
May 14 12:30:24.680: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (1m20.417367449s elapsed, will retry)
May 14 12:30:34.012: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (1m29.74887246s elapsed, will retry)
May 14 12:30:41.242: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (1m36.979337196s elapsed, will retry)
May 14 12:30:48.866: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (1m44.603246955s elapsed, will retry)
May 14 12:30:54.962: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (1m50.699163923s elapsed, will retry)
May 14 12:31:00.958: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (1m56.695087676s elapsed, will retry)
May 14 12:31:11.621: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (2m7.357822738s elapsed, will retry)
May 14 12:31:16.715: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (2m12.452210667s elapsed, will retry)
May 14 12:31:24.930: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (2m20.666429842s elapsed, will retry)
May 14 12:31:33.612: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (2m29.348605226s elapsed, will retry)
May 14 12:31:39.156: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (2m34.89318393s elapsed, will retry)
May 14 12:31:48.903: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (2m44.639445373s elapsed, will retry)
May 14 12:31:55.768: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (2m51.504763213s elapsed, will retry)
May 14 12:32:02.431: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (2m58.16792316s elapsed, will retry)
May 14 12:32:04.581: INFO: Pod kube-system	coredns-66bff467f8-rvq2k	kali-control-plane	
May 14 12:32:04.581: INFO: Pod kube-system	coredns-66bff467f8-w6zxd	kali-control-plane	
May 14 12:32:04.581: INFO: Pod kube-system	etcd-kali-control-plane	kali-control-plane	
May 14 12:32:04.581: INFO: Pod kube-system	kindnet-65djz	kali-control-plane	
May 14 12:32:04.581: INFO: Pod kube-system	kindnet-f8plf	kali-worker	
May 14 12:32:04.581: INFO: Pod kube-system	kindnet-mcdh2	kali-worker2	
May 14 12:32:04.581: INFO: Pod kube-system	kube-apiserver-kali-control-plane	kali-control-plane	
May 14 12:32:04.581: INFO: Pod kube-system	kube-controller-manager-kali-control-plane	kali-control-plane	
May 14 12:32:04.581: INFO: Pod kube-system	kube-proxy-mmnb6	kali-worker2	
May 14 12:32:04.581: INFO: Pod kube-system	kube-proxy-pnhtq	kali-control-plane	
May 14 12:32:04.581: INFO: Pod kube-system	kube-proxy-vrswj	kali-worker	
May 14 12:32:04.581: INFO: Pod kube-system	kube-scheduler-kali-control-plane	kali-control-plane	
May 14 12:32:04.581: INFO: Pod local-path-storage	local-path-provisioner-bd4bb6b75-6l9ph	kali-control-plane	
May 14 12:32:04.581: INFO: Pod services-8209	pod1	kali-worker2	
May 14 12:32:04.581: FAIL: failed to validate endpoints for service endpoint-test2 in namespace: services-8209
Unexpected error:
    <*errors.errorString | 0xc00278eb90>: {
        s: "Timed out waiting for service endpoint-test2 in namespace services-8209 to expose endpoints map[pod1:[80]] (3m0s elapsed)",
    }
    Timed out waiting for service endpoint-test2 in namespace services-8209 to expose endpoints map[pod1:[80]] (3m0s elapsed)
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func21.4()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:761 +0x70e
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002bf7d00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc002bf7d00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc002bf7d00, 0x4ae8810)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "services-8209".
STEP: Found 4 events.
May 14 12:32:04.657: INFO: At 2020-05-14 12:29:04 +0000 UTC - event for pod1: {default-scheduler } Scheduled: Successfully assigned services-8209/pod1 to kali-worker2
May 14 12:32:04.657: INFO: At 2020-05-14 12:29:05 +0000 UTC - event for pod1: {kubelet kali-worker2} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12" already present on machine
May 14 12:32:04.657: INFO: At 2020-05-14 12:31:05 +0000 UTC - event for pod1: {kubelet kali-worker2} Failed: Error: context deadline exceeded
May 14 12:32:04.657: INFO: At 2020-05-14 12:31:06 +0000 UTC - event for pod1: {kubelet kali-worker2} Failed: Error: failed to reserve container name "pause_pod1_services-8209_8e4d64e0-d0d6-4ca6-a125-a20c42aeaf50_0": name "pause_pod1_services-8209_8e4d64e0-d0d6-4ca6-a125-a20c42aeaf50_0" is reserved for "cb6d17b6df2051a6c6346df2b5462322575512f1730ef98436beab8269347770"
May 14 12:32:04.659: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May 14 12:32:04.659: INFO: pod1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:29:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:29:04 +0000 UTC ContainersNotReady containers with unready status: [pause]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:29:04 +0000 UTC ContainersNotReady containers with unready status: [pause]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:29:04 +0000 UTC  }]
May 14 12:32:04.659: INFO: 
May 14 12:32:04.661: INFO: 
Logging node info for node kali-control-plane
May 14 12:32:04.663: INFO: Node Info: &Node{ObjectMeta:{kali-control-plane   /api/v1/nodes/kali-control-plane 84a583c8-90fb-49f1-81ac-1fbe141d1a1c 4293249 0 2020-04-29 09:30:59 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-04-29 09:31:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2020-04-29 09:31:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 48 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-14 12:27:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 
107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 
34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 
125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 
110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-05-14 12:27:47 +0000 UTC,LastTransitionTime:2020-04-29 09:30:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-05-14 12:27:47 +0000 UTC,LastTransitionTime:2020-04-29 09:30:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-05-14 12:27:47 +0000 UTC,LastTransitionTime:2020-04-29 09:30:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-05-14 12:27:47 +0000 UTC,LastTransitionTime:2020-04-29 09:31:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.19,},NodeAddress{Type:Hostname,Address:kali-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2146cf85bed648199604ab2e0e9ac609,SystemUUID:e83c0db4-babe-44fc-9dad-b5eeae6d23fd,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 
19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 14 12:32:04.663: INFO: 
Logging kubelet events for node kali-control-plane
May 14 12:32:04.664: INFO: 
Logging pods the kubelet thinks are on node kali-control-plane
May 14 12:32:04.677: INFO: kube-apiserver-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.677: INFO: 	Container kube-apiserver ready: true, restart count 0
May 14 12:32:04.677: INFO: kube-controller-manager-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.677: INFO: 	Container kube-controller-manager ready: true, restart count 1
May 14 12:32:04.677: INFO: kube-scheduler-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.677: INFO: 	Container kube-scheduler ready: true, restart count 0
May 14 12:32:04.677: INFO: kube-proxy-pnhtq started at 2020-04-29 09:31:19 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.677: INFO: 	Container kube-proxy ready: true, restart count 0
May 14 12:32:04.677: INFO: kindnet-65djz started at 2020-04-29 09:31:19 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.677: INFO: 	Container kindnet-cni ready: true, restart count 0
May 14 12:32:04.677: INFO: coredns-66bff467f8-w6zxd started at 2020-04-29 09:31:37 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.677: INFO: 	Container coredns ready: true, restart count 0
May 14 12:32:04.677: INFO: local-path-provisioner-bd4bb6b75-6l9ph started at 2020-04-29 09:31:37 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.677: INFO: 	Container local-path-provisioner ready: true, restart count 0
May 14 12:32:04.677: INFO: etcd-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.677: INFO: 	Container etcd ready: true, restart count 0
May 14 12:32:04.677: INFO: coredns-66bff467f8-rvq2k started at 2020-04-29 09:31:37 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.677: INFO: 	Container coredns ready: true, restart count 0
W0514 12:32:04.705879       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 12:32:04.763: INFO: 
Latency metrics for node kali-control-plane
May 14 12:32:04.763: INFO: 
Logging node info for node kali-worker
May 14 12:32:04.765: INFO: Node Info: &Node{ObjectMeta:{kali-worker   /api/v1/nodes/kali-worker d9882acc-073c-45e9-9299-9096bf571d2e 4293201 0 2020-04-29 09:31:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-04-29 09:31:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2020-04-29 09:32:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 50 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubelet Update v1 2020-05-14 12:27:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 
117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 
100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 
114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-05-14 12:27:21 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-05-14 12:27:21 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-05-14 12:27:21 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-05-14 12:27:21 +0000 UTC,LastTransitionTime:2020-04-29 09:32:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.15,},NodeAddress{Type:Hostname,Address:kali-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e96e6d32a4f2448f9fda0690bf27c25a,SystemUUID:62c26944-edd7-4df2-a453-f2dbfa247f6d,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 
docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3 docker.io/aquasec/kube-hunter:latest],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7 docker.io/aquasec/kube-bench:latest],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9],SizeBytes:8029111,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 
docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c docker.io/library/busybox:latest],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 14 12:32:04.766: INFO: 
Logging kubelet events for node kali-worker
May 14 12:32:04.768: INFO: 
Logging pods the kubelet thinks are on node kali-worker
May 14 12:32:04.782: INFO: kindnet-f8plf started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.782: INFO: 	Container kindnet-cni ready: true, restart count 1
May 14 12:32:04.782: INFO: kube-proxy-vrswj started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.782: INFO: 	Container kube-proxy ready: true, restart count 0
W0514 12:32:04.786047       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 12:32:04.823: INFO: 
Latency metrics for node kali-worker
May 14 12:32:04.823: INFO: 
Logging node info for node kali-worker2
May 14 12:32:04.826: INFO: Node Info: &Node{ObjectMeta:{kali-worker2   /api/v1/nodes/kali-worker2 6eb4ebcc-ce4f-4a4d-bd7f-5f7e293c044e 4293729 0 2020-04-29 09:31:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-04-29 09:31:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2020-04-29 09:32:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 49 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubelet Update v1 2020-05-14 12:30:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 
117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 
100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 
114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-05-14 12:30:55 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-05-14 12:30:55 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-05-14 12:30:55 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-05-14 12:30:55 +0000 UTC,LastTransitionTime:2020-04-29 09:32:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.18,},NodeAddress{Type:Hostname,Address:kali-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e6c808dc84074a009430113a4db25a88,SystemUUID:a7f2e4d4-2bac-4d1a-b10e-f9b7d6d56664,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 
docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3 docker.io/aquasec/kube-hunter:latest],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7 docker.io/aquasec/kube-bench:latest],SizeBytes:8039918,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 
docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c docker.io/library/busybox:latest],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 14 12:32:04.826: INFO: 
Logging kubelet events for node kali-worker2
May 14 12:32:04.828: INFO: 
Logging pods the kubelet thinks are on node kali-worker2
May 14 12:32:04.843: INFO: kube-proxy-mmnb6 started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.843: INFO: 	Container kube-proxy ready: true, restart count 0
May 14 12:32:04.843: INFO: pod1 started at 2020-05-14 12:29:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.843: INFO: 	Container pause ready: false, restart count 0
May 14 12:32:04.843: INFO: kindnet-mcdh2 started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
May 14 12:32:04.843: INFO: 	Container kindnet-cni ready: true, restart count 0
W0514 12:32:04.846460       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 12:32:04.879: INFO: 
Latency metrics for node kali-worker2
May 14 12:32:04.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8209" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• Failure [182.077 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703

  May 14 12:32:04.581: failed to validate endpoints for service endpoint-test2 in namespace: services-8209
  Unexpected error:
      <*errors.errorString | 0xc00278eb90>: {
          s: "Timed out waiting for service endpoint-test2 in namespace services-8209 to expose endpoints map[pod1:[80]] (3m0s elapsed)",
      }
      Timed out waiting for service endpoint-test2 in namespace services-8209 to expose endpoints map[pod1:[80]] (3m0s elapsed)
  occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:761
------------------------------
{"msg":"FAILED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":260,"skipped":4510,"failed":2,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
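The failure above is the e2e framework's wait-with-deadline pattern timing out: it polls the service's endpoints until they match the expected map (`map[pod1:[80]]`) or 3m0s elapses. A minimal sketch of that pattern, under the assumption of a hypothetical `get_endpoints` lookup standing in for the API call:

```python
import time

def wait_for_endpoints(get_endpoints, expected, timeout=180.0, interval=2.0):
    """Poll get_endpoints() until it equals `expected` or `timeout` seconds pass.

    Mirrors the bounded-poll shape of the e2e framework's endpoint validation;
    `get_endpoints` is a hypothetical stand-in for the real API lookup.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_endpoints() == expected:
            return True
        time.sleep(interval)
    # Same failure mode as the log above: expected endpoints never appeared.
    raise TimeoutError(
        "Timed out waiting for endpoints %r (%ss elapsed)" % (expected, timeout))
```

With a fake lookup that already matches, the call returns `True` immediately; with a mismatched lookup and a short timeout it raises `TimeoutError`, which is the error string wrapped into the Ginkgo failure above.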
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:32:04.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 14 12:33:24.767: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:33:24.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1438" for this suite.

• [SLOW TEST:79.986 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4518,"failed":2,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
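The test above exercises the `terminationMessagePolicy: FallbackToLogsOnError` container field: when a container fails without writing to its `terminationMessagePath` (default `/dev/termination-log`), the kubelet falls back to the tail of the container's log as the termination message. A minimal pod manifest using the field might look like this (pod name and command are hypothetical, not taken from the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    # Writes "DONE" only to stdout (the log), not to the termination
    # message file, then exits non-zero.
    command: ["sh", "-c", "echo DONE; exit 1"]
    # On failure with nothing at terminationMessagePath, the kubelet uses
    # the log tail ("DONE") as the termination message.
    terminationMessagePolicy: FallbackToLogsOnError
```

This matches the expectation logged above (`Expected: &{DONE} to match Container's Termination Message: DONE`), and the companion test that follows checks the complementary case: a succeeding pod gets an empty termination message under the same policy.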
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:33:24.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:34:31.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7218" for this suite.

• [SLOW TEST:68.865 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4529,"failed":2,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:34:33.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 14 12:35:46.740: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:35:49.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6456" for this suite.

• [SLOW TEST:76.050 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4562,"failed":2,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:35:49.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 14 12:35:51.488: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb" in namespace "projected-8033" to be "Succeeded or Failed"
May 14 12:35:51.908: INFO: Pod "downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb": Phase="Pending", Reason="", readiness=false. Elapsed: 420.350659ms
May 14 12:35:54.150: INFO: Pod "downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662289189s
May 14 12:35:57 → 12:40:15: INFO: Pod "downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb": Phase="Pending", Reason="", readiness=false. (unchanged across 103 polls at ~2-4s intervals; Elapsed: 6.2s → 4m24.3s; repeated poll lines elided)
May 14 12:40:19.297: INFO: Pod "downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb": Phase="Failed", Reason="", readiness=false. Elapsed: 4m27.808894834s
May 14 12:40:19.371: INFO: Output of node "kali-worker2" pod "downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb" container "client-container": failed to try resolving symlinks in path "/var/log/pods/projected-8033_downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb_07dbd6d7-a3e1-4a80-8029-7d049280269b/client-container/0.log": lstat /var/log/pods/projected-8033_downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb_07dbd6d7-a3e1-4a80-8029-7d049280269b/client-container/0.log: no such file or directory
STEP: delete the pod
May 14 12:40:20.261: INFO: Waiting for pod downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb to disappear
May 14 12:40:20.487: INFO: Pod downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb no longer exists
May 14 12:40:20.488: FAIL: Unexpected error:
    <*errors.errorString | 0xc0024a2b70>: {
        s: "expected pod \"downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb\" success: pod \"downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:53 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.18 PodIP:10.244.1.162 PodIPs:[{IP:10.244.1.162}] StartTime:2020-05-14 12:35:53 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:&ContainerStateWaiting{Reason:CreateContainerError,Message:failed to reserve container name \"client-container_downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb_projected-8033_07dbd6d7-a3e1-4a80-8029-7d049280269b_0\": name \"client-container_downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb_projected-8033_07dbd6d7-a3e1-4a80-8029-7d049280269b_0\" is reserved for \"d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da\",} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:containerd://d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da,}} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 
ImageID:gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:containerd://d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da Started:0xc0040f8e0b}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    expected pod "downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb" success: pod "downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:53 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.18 PodIP:10.244.1.162 PodIPs:[{IP:10.244.1.162}] StartTime:2020-05-14 12:35:53 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:&ContainerStateWaiting{Reason:CreateContainerError,Message:failed to reserve container name "client-container_downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb_projected-8033_07dbd6d7-a3e1-4a80-8029-7d049280269b_0": name "client-container_downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb_projected-8033_07dbd6d7-a3e1-4a80-8029-7d049280269b_0" is reserved for "d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da",} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:containerd://d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da,}} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 
ImageID:gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:containerd://d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da Started:0xc0040f8e0b}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0002b82c0, 0x4963cc9, 0x1a, 0xc0016c9800, 0x0, 0xc00496b2b0, 0x1, 0x1, 0x4aeca10)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:798 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:561
k8s.io/kubernetes/test/e2e/common.glob..func23.3()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:71 +0x144
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002bf7d00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc002bf7d00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc002bf7d00, 0x4ae8810)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "projected-8033".
STEP: Found 4 events.
May 14 12:40:20.555: INFO: At 2020-05-14 12:35:51 +0000 UTC - event for downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb: {default-scheduler } Scheduled: Successfully assigned projected-8033/downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb to kali-worker2
May 14 12:40:20.555: INFO: At 2020-05-14 12:35:55 +0000 UTC - event for downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb: {kubelet kali-worker2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
May 14 12:40:20.555: INFO: At 2020-05-14 12:37:55 +0000 UTC - event for downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb: {kubelet kali-worker2} Failed: Error: context deadline exceeded
May 14 12:40:20.555: INFO: At 2020-05-14 12:37:55 +0000 UTC - event for downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb: {kubelet kali-worker2} Failed: Error: failed to reserve container name "client-container_downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb_projected-8033_07dbd6d7-a3e1-4a80-8029-7d049280269b_0": name "client-container_downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb_projected-8033_07dbd6d7-a3e1-4a80-8029-7d049280269b_0" is reserved for "d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da"
May 14 12:40:21.963: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
May 14 12:40:21.963: INFO: 
May 14 12:40:22.452: INFO: 
Logging node info for node kali-control-plane
May 14 12:40:22.454: INFO: Node Info: &Node{ObjectMeta:{kali-control-plane   /api/v1/nodes/kali-control-plane 84a583c8-90fb-49f1-81ac-1fbe141d1a1c 4294641 0 2020-04-29 09:30:59 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-04-29 09:31:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2020-04-29 09:31:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 48 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-14 12:37:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 
107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 
34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 
125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 
110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-05-14 12:37:47 +0000 UTC,LastTransitionTime:2020-04-29 09:30:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-05-14 12:37:47 +0000 UTC,LastTransitionTime:2020-04-29 09:30:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-05-14 12:37:47 +0000 UTC,LastTransitionTime:2020-04-29 09:30:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-05-14 12:37:47 +0000 UTC,LastTransitionTime:2020-04-29 09:31:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.19,},NodeAddress{Type:Hostname,Address:kali-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2146cf85bed648199604ab2e0e9ac609,SystemUUID:e83c0db4-babe-44fc-9dad-b5eeae6d23fd,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 
19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 14 12:40:22.455: INFO: 
Logging kubelet events for node kali-control-plane
May 14 12:40:23.087: INFO: 
Logging pods the kubelet thinks are on node kali-control-plane
May 14 12:40:23.109: INFO: kube-controller-manager-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.109: INFO: 	Container kube-controller-manager ready: true, restart count 1
May 14 12:40:23.109: INFO: kube-scheduler-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.109: INFO: 	Container kube-scheduler ready: true, restart count 0
May 14 12:40:23.109: INFO: kube-proxy-pnhtq started at 2020-04-29 09:31:19 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.109: INFO: 	Container kube-proxy ready: true, restart count 0
May 14 12:40:23.109: INFO: kindnet-65djz started at 2020-04-29 09:31:19 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.109: INFO: 	Container kindnet-cni ready: true, restart count 0
May 14 12:40:23.109: INFO: coredns-66bff467f8-w6zxd started at 2020-04-29 09:31:37 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.109: INFO: 	Container coredns ready: true, restart count 0
May 14 12:40:23.109: INFO: local-path-provisioner-bd4bb6b75-6l9ph started at 2020-04-29 09:31:37 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.109: INFO: 	Container local-path-provisioner ready: true, restart count 0
May 14 12:40:23.109: INFO: kube-apiserver-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.109: INFO: 	Container kube-apiserver ready: true, restart count 0
May 14 12:40:23.109: INFO: coredns-66bff467f8-rvq2k started at 2020-04-29 09:31:37 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.109: INFO: 	Container coredns ready: true, restart count 0
May 14 12:40:23.109: INFO: etcd-kali-control-plane started at 2020-04-29 09:31:04 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.109: INFO: 	Container etcd ready: true, restart count 0
W0514 12:40:23.112798       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 12:40:23.187: INFO: 
Latency metrics for node kali-control-plane
May 14 12:40:23.187: INFO: 
Logging node info for node kali-worker
May 14 12:40:23.190: INFO: Node Info: &Node{ObjectMeta:{kali-worker   /api/v1/nodes/kali-worker d9882acc-073c-45e9-9299-9096bf571d2e 4294586 0 2020-04-29 09:31:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-04-29 09:31:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2020-04-29 09:32:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 50 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubelet Update v1 2020-05-14 12:37:22 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 
117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 
100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 
114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-05-14 12:37:22 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-05-14 12:37:22 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-05-14 12:37:22 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-05-14 12:37:22 +0000 UTC,LastTransitionTime:2020-04-29 09:32:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.15,},NodeAddress{Type:Hostname,Address:kali-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e96e6d32a4f2448f9fda0690bf27c25a,SystemUUID:62c26944-edd7-4df2-a453-f2dbfa247f6d,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 
docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3 docker.io/aquasec/kube-hunter:latest],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7 docker.io/aquasec/kube-bench:latest],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9],SizeBytes:8029111,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 
docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c docker.io/library/busybox:latest],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 14 12:40:23.191: INFO: 
Logging kubelet events for node kali-worker
May 14 12:40:23.193: INFO: 
Logging pods the kubelet thinks are on node kali-worker
May 14 12:40:23.210: INFO: kindnet-f8plf started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.210: INFO: 	Container kindnet-cni ready: true, restart count 1
May 14 12:40:23.210: INFO: kube-proxy-vrswj started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.210: INFO: 	Container kube-proxy ready: true, restart count 0
W0514 12:40:23.217456       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 12:40:23.250: INFO: 
Latency metrics for node kali-worker
May 14 12:40:23.250: INFO: 
Logging node info for node kali-worker2
May 14 12:40:23.253: INFO: Node Info: &Node{ObjectMeta:{kali-worker2   /api/v1/nodes/kali-worker2 6eb4ebcc-ce4f-4a4d-bd7f-5f7e293c044e 4294420 0 2020-04-29 09:31:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-04-29 09:31:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2020-04-29 09:32:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 49 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubelet Update v1 2020-05-14 12:35:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 
117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 
100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 
114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-05-14 12:35:55 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-05-14 12:35:55 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-05-14 12:35:55 +0000 UTC,LastTransitionTime:2020-04-29 09:31:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-05-14 12:35:55 +0000 UTC,LastTransitionTime:2020-04-29 09:32:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.18,},NodeAddress{Type:Hostname,Address:kali-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e6c808dc84074a009430113a4db25a88,SystemUUID:a7f2e4d4-2bac-4d1a-b10e-f9b7d6d56664,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 
docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3 docker.io/aquasec/kube-hunter:latest],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7 docker.io/aquasec/kube-bench:latest],SizeBytes:8039918,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 
docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c docker.io/library/busybox:latest],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 14 12:40:23.253: INFO: 
Logging kubelet events for node kali-worker2
May 14 12:40:23.256: INFO: 
Logging pods the kubelet thinks are on node kali-worker2
May 14 12:40:23.261: INFO: kube-proxy-mmnb6 started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.261: INFO: 	Container kube-proxy ready: true, restart count 0
May 14 12:40:23.261: INFO: kindnet-mcdh2 started at 2020-04-29 09:31:40 +0000 UTC (0+1 container statuses recorded)
May 14 12:40:23.261: INFO: 	Container kindnet-cni ready: true, restart count 0
W0514 12:40:23.264395       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 12:40:23.299: INFO: 
Latency metrics for node kali-worker2
May 14 12:40:23.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8033" for this suite.

• Failure [273.464 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703

  May 14 12:40:20.488: Unexpected error:
      <*errors.errorString | 0xc0024a2b70>: {
          s: "expected pod \"downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb\" success: pod \"downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:53 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.18 PodIP:10.244.1.162 PodIPs:[{IP:10.244.1.162}] StartTime:2020-05-14 12:35:53 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:&ContainerStateWaiting{Reason:CreateContainerError,Message:failed to reserve container name \"client-container_downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb_projected-8033_07dbd6d7-a3e1-4a80-8029-7d049280269b_0\": name \"client-container_downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb_projected-8033_07dbd6d7-a3e1-4a80-8029-7d049280269b_0\" is reserved for \"d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da\",} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:containerd://d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da,}} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 
ImageID:gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:containerd://d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da Started:0xc0040f8e0b}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
      }
      expected pod "downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb" success: pod "downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:53 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:35:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.18 PodIP:10.244.1.162 PodIPs:[{IP:10.244.1.162}] StartTime:2020-05-14 12:35:53 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:&ContainerStateWaiting{Reason:CreateContainerError,Message:failed to reserve container name "client-container_downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb_projected-8033_07dbd6d7-a3e1-4a80-8029-7d049280269b_0": name "client-container_downwardapi-volume-73445c42-8333-43ae-b042-8ecc336f98eb_projected-8033_07dbd6d7-a3e1-4a80-8029-7d049280269b_0" is reserved for "d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da",} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:containerd://d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da,}} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 
ImageID:gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:containerd://d5ea060bb2a3ffd6391191979974d1b5a101b5b0913ad0804b4c6cdef71749da Started:0xc0040f8e0b}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
  occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:798
------------------------------
{"msg":"FAILED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4578,"failed":3,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]"]}
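The JSON progress lines interleaved with the spec output (like the FAILED line above) are machine-readable, which makes it easy to tally results from a run like this one. A minimal sketch, assuming each progress line is a single JSON object starting with `{"msg"` as in this log; the sample line is copied from the PASSED record later in the run:

```python
import json

# A per-spec progress line, copied (trimmed) from this log's format.
sample = ('{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds '
          'to be updated [NodeConformance] [Conformance]",'
          '"total":275,"completed":264,"skipped":4601,"failed":3,"failures":["x"]}')

def summarize(lines):
    """Return (completed, failed, failures) from the last JSON progress line,
    or None if the input contains no such line."""
    last = None
    for line in lines:
        line = line.strip()
        # Progress records are the only lines that begin with {"msg"
        if line.startswith('{"msg"'):
            last = json.loads(line)
    if last is None:
        return None
    return last["completed"], last["failed"], last.get("failures", [])

print(summarize([sample]))  # -> (264, 3, ['x'])
```

The cumulative `failed` count and `failures` list let you spot at a glance which earlier specs broke, without scrolling back through the node-info dumps.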
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:40:23.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 14 12:43:01.213: INFO: Successfully updated pod "pod-update-activedeadlineseconds-35a07e68-1fc4-4821-9ba4-4bf7c2176f90"
May 14 12:43:01.214: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-35a07e68-1fc4-4821-9ba4-4bf7c2176f90" in namespace "pods-4484" to be "terminated due to deadline exceeded"
May 14 12:43:02.067: INFO: Pod "pod-update-activedeadlineseconds-35a07e68-1fc4-4821-9ba4-4bf7c2176f90": Phase="Running", Reason="", readiness=false. Elapsed: 853.350899ms
May 14 12:43:04.261: INFO: Pod "pod-update-activedeadlineseconds-35a07e68-1fc4-4821-9ba4-4bf7c2176f90": Phase="Running", Reason="", readiness=false. Elapsed: 3.047854042s
May 14 12:43:06.276: INFO: Pod "pod-update-activedeadlineseconds-35a07e68-1fc4-4821-9ba4-4bf7c2176f90": Phase="Running", Reason="", readiness=false. Elapsed: 5.062157371s
May 14 12:43:08.297: INFO: Pod "pod-update-activedeadlineseconds-35a07e68-1fc4-4821-9ba4-4bf7c2176f90": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 7.083494584s
May 14 12:43:08.297: INFO: Pod "pod-update-activedeadlineseconds-35a07e68-1fc4-4821-9ba4-4bf7c2176f90" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:43:08.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4484" for this suite.

• [SLOW TEST:166.121 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4601,"failed":3,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]"]}
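The passing spec above exercises the pod-level activeDeadlineSeconds field: once the deadline is set (or shortened) on a running pod, the kubelet transitions it to Phase=Failed with Reason=DeadlineExceeded, exactly as the 12:43:08 log line shows. A minimal illustrative manifest (the pod name is hypothetical; the image appears in the node image lists earlier in this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo            # hypothetical name, for illustration only
spec:
  activeDeadlineSeconds: 5       # kubelet fails the pod with DeadlineExceeded after 5s
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]   # outlives the deadline, so the pod is killed
```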
SSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:43:09.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
May 14 12:43:10.065: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix748006133/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:43:10.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6514" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":265,"skipped":4604,"failed":3,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:43:10.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 14 12:43:27.460: INFO: Successfully updated pod "annotationupdate6a3cac21-24d7-4b61-b38d-1b75aded4265"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:43:29.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6229" for this suite.

• [SLOW TEST:19.219 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4616,"failed":3,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:43:29.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 14 12:43:31.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4712'
May 14 12:44:14.126: INFO: stderr: ""
May 14 12:44:14.126: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
May 14 12:44:29.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-4712 -o json'
May 14 12:44:29.580: INFO: stderr: ""
May 14 12:44:29.580: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-05-14T12:44:14Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-05-14T12:44:14Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                        
    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.1.164\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                            }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-05-14T12:44:27Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-4712\",\n        \"resourceVersion\": \"4295549\",\n        \"selfLink\": 
\"/api/v1/namespaces/kubectl-4712/pods/e2e-test-httpd-pod\",\n        \"uid\": \"73beb2d6-be43-4a73-b9d3-1d5013f24a43\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-khb8c\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-khb8c\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-khb8c\"\n                }\n       
     }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-14T12:44:14Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-14T12:44:27Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-14T12:44:27Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-14T12:44:14Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://b4ad1f66075f65efac64a7953078ae5f36f2434a578754fec9c6da3007efa128\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-05-14T12:44:25Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.17.0.18\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.164\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.1.164\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        
\"startTime\": \"2020-05-14T12:44:14Z\"\n    }\n}\n"
STEP: replace the image in the pod
May 14 12:44:29.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4712'
May 14 12:44:29.909: INFO: stderr: ""
May 14 12:44:29.909: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
May 14 12:44:29.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4712'
May 14 12:44:43.438: INFO: stderr: ""
May 14 12:44:43.438: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:44:43.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4712" for this suite.

• [SLOW TEST:73.964 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":267,"skipped":4641,"failed":3,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:44:43.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
May 14 12:44:43.734: INFO: Waiting up to 5m0s for pod "pod-6579ecca-e5be-45c1-b5a6-11282d82c338" in namespace "emptydir-1162" to be "Succeeded or Failed"
May 14 12:44:43.867: INFO: Pod "pod-6579ecca-e5be-45c1-b5a6-11282d82c338": Phase="Pending", Reason="", readiness=false. Elapsed: 132.534021ms
May 14 12:44:46.770: INFO: Pod "pod-6579ecca-e5be-45c1-b5a6-11282d82c338": Phase="Pending", Reason="", readiness=false. Elapsed: 3.035002492s
May 14 12:44:48.778: INFO: Pod "pod-6579ecca-e5be-45c1-b5a6-11282d82c338": Phase="Pending", Reason="", readiness=false. Elapsed: 5.043527926s
May 14 12:44:50.915: INFO: Pod "pod-6579ecca-e5be-45c1-b5a6-11282d82c338": Phase="Pending", Reason="", readiness=false. Elapsed: 7.180348607s
May 14 12:44:53.053: INFO: Pod "pod-6579ecca-e5be-45c1-b5a6-11282d82c338": Phase="Pending", Reason="", readiness=false. Elapsed: 9.318552417s
May 14 12:44:55.056: INFO: Pod "pod-6579ecca-e5be-45c1-b5a6-11282d82c338": Phase="Pending", Reason="", readiness=false. Elapsed: 11.321977149s
May 14 12:44:57.514: INFO: Pod "pod-6579ecca-e5be-45c1-b5a6-11282d82c338": Phase="Pending", Reason="", readiness=false. Elapsed: 13.779042943s
May 14 12:44:59.516: INFO: Pod "pod-6579ecca-e5be-45c1-b5a6-11282d82c338": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.781728801s
STEP: Saw pod success
May 14 12:44:59.516: INFO: Pod "pod-6579ecca-e5be-45c1-b5a6-11282d82c338" satisfied condition "Succeeded or Failed"
May 14 12:44:59.518: INFO: Trying to get logs from node kali-worker2 pod pod-6579ecca-e5be-45c1-b5a6-11282d82c338 container test-container: 
STEP: delete the pod
May 14 12:44:59.562: INFO: Waiting for pod pod-6579ecca-e5be-45c1-b5a6-11282d82c338 to disappear
May 14 12:44:59.663: INFO: Pod pod-6579ecca-e5be-45c1-b5a6-11282d82c338 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:44:59.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1162" for this suite.

• [SLOW TEST:16.216 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4664,"failed":3,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:44:59.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 14 12:45:00.335: INFO: Waiting up to 5m0s for pod "pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d" in namespace "emptydir-357" to be "Succeeded or Failed"
May 14 12:45:00.386: INFO: Pod "pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 51.33747ms
May 14 12:45:02.562: INFO: Pod "pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22725936s
May 14 12:45:04.742: INFO: Pod "pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.40665926s
May 14 12:45:07.135: INFO: Pod "pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.80046966s
May 14 12:45:09.169: INFO: Pod "pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.834153062s
May 14 12:45:12.203: INFO: Pod "pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.867903726s
May 14 12:45:14.551: INFO: Pod "pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.216201251s
May 14 12:45:17.814: INFO: Pod "pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d": Phase="Running", Reason="", readiness=true. Elapsed: 17.478716465s
May 14 12:45:20.382: INFO: Pod "pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d": Phase="Running", Reason="", readiness=true. Elapsed: 20.047512962s
May 14 12:45:22.386: INFO: Pod "pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.051498144s
STEP: Saw pod success
May 14 12:45:22.387: INFO: Pod "pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d" satisfied condition "Succeeded or Failed"
May 14 12:45:22.390: INFO: Trying to get logs from node kali-worker pod pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d container test-container: 
STEP: delete the pod
May 14 12:45:23.371: INFO: Waiting for pod pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d to disappear
May 14 12:45:24.242: INFO: Pod pod-4c65677b-d6f5-409f-ae0b-607fc5cc4d1d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:45:24.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-357" for this suite.

• [SLOW TEST:27.838 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4684,"failed":3,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:45:27.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 14 12:45:28.571: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67" in namespace "downward-api-3602" to be "Succeeded or Failed"
May 14 12:45:28.869: INFO: Pod "downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67": Phase="Pending", Reason="", readiness=false. Elapsed: 298.733167ms
May 14 12:45:30.873: INFO: Pod "downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302570403s
May 14 12:45:32.922: INFO: Pod "downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351154211s
May 14 12:45:35.089: INFO: Pod "downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.518728126s
May 14 12:45:38.083: INFO: Pod "downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67": Phase="Pending", Reason="", readiness=false. Elapsed: 9.512719447s
May 14 12:45:40.268: INFO: Pod "downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67": Phase="Pending", Reason="", readiness=false. Elapsed: 11.697186076s
May 14 12:45:42.273: INFO: Pod "downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67": Phase="Running", Reason="", readiness=true. Elapsed: 13.702464069s
May 14 12:45:44.276: INFO: Pod "downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.705149439s
STEP: Saw pod success
May 14 12:45:44.276: INFO: Pod "downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67" satisfied condition "Succeeded or Failed"
May 14 12:45:44.278: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67 container client-container: 
STEP: delete the pod
May 14 12:45:44.334: INFO: Waiting for pod downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67 to disappear
May 14 12:45:44.337: INFO: Pod downwardapi-volume-68ea2262-7baa-4ba0-8c1e-302a48f26d67 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:45:44.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3602" for this suite.

• [SLOW TEST:16.834 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4689,"failed":3,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:45:44.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-43eea064-5147-45b3-b82d-e7d287561e8d
STEP: Creating a pod to test consume configMaps
May 14 12:45:44.483: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8b1bf090-adec-4160-8faf-24ba037f8fb4" in namespace "projected-1953" to be "Succeeded or Failed"
May 14 12:45:44.488: INFO: Pod "pod-projected-configmaps-8b1bf090-adec-4160-8faf-24ba037f8fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.594238ms
May 14 12:45:47.001: INFO: Pod "pod-projected-configmaps-8b1bf090-adec-4160-8faf-24ba037f8fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.51821362s
May 14 12:45:49.005: INFO: Pod "pod-projected-configmaps-8b1bf090-adec-4160-8faf-24ba037f8fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.521932086s
May 14 12:45:51.509: INFO: Pod "pod-projected-configmaps-8b1bf090-adec-4160-8faf-24ba037f8fb4": Phase="Running", Reason="", readiness=true. Elapsed: 7.026134603s
May 14 12:45:53.512: INFO: Pod "pod-projected-configmaps-8b1bf090-adec-4160-8faf-24ba037f8fb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.029490332s
STEP: Saw pod success
May 14 12:45:53.512: INFO: Pod "pod-projected-configmaps-8b1bf090-adec-4160-8faf-24ba037f8fb4" satisfied condition "Succeeded or Failed"
May 14 12:45:53.515: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-8b1bf090-adec-4160-8faf-24ba037f8fb4 container projected-configmap-volume-test: 
STEP: delete the pod
May 14 12:45:53.559: INFO: Waiting for pod pod-projected-configmaps-8b1bf090-adec-4160-8faf-24ba037f8fb4 to disappear
May 14 12:45:53.571: INFO: Pod pod-projected-configmaps-8b1bf090-adec-4160-8faf-24ba037f8fb4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:45:53.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1953" for this suite.

• [SLOW TEST:9.297 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4693,"failed":3,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 14 12:45:53.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
May 14 12:45:53.791: INFO: >>> kubeConfig: /root/.kube/config
May 14 12:45:55.807: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 14 12:46:07.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1009" for this suite.

• [SLOW TEST:13.873 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":272,"skipped":4700,"failed":3,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSMay 14 12:46:07.515: INFO: Running AfterSuite actions on all nodes
May 14 12:46:07.515: INFO: Running AfterSuite actions on node 1
May 14 12:46:07.515: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":272,"skipped":4717,"failed":3,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]"]}


Summarizing 3 Failures:

[Fail] [sig-cli] Kubectl client Kubectl logs [It] should be able to retrieve and filter logs  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1329

[Fail] [sig-network] Services [It] should serve a basic endpoint from pods  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:761

[Fail] [sig-storage] Projected downwardAPI [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:798

Ran 275 of 4992 Specs in 6939.025 seconds
FAIL! -- 272 Passed | 3 Failed | 0 Pending | 4717 Skipped
--- FAIL: TestE2E (6939.13s)
FAIL