I0811 11:32:13.973399 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0811 11:32:13.973589 7 e2e.go:124] Starting e2e run "8c2d6a9a-828b-42e7-bcb2-130a622968b9" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597145532 - Will randomize all specs
Will run 275 of 4992 specs

Aug 11 11:32:14.026: INFO: >>> kubeConfig: /root/.kube/config
Aug 11 11:32:14.032: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 11 11:32:14.053: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 11 11:32:14.087: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 11 11:32:14.087: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 11 11:32:14.087: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 11 11:32:14.093: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 11 11:32:14.093: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 11 11:32:14.093: INFO: e2e test version: v1.18.5
Aug 11 11:32:14.094: INFO: kube-apiserver version: v1.18.4
Aug 11 11:32:14.094: INFO: >>> kubeConfig: /root/.kube/config
Aug 11 11:32:14.098: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:32:14.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
Aug 11 11:32:14.345: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7689.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7689.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7689.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7689.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 11 11:32:29.091: INFO: DNS probes using dns-test-0787337a-62cf-4453-8ad9-4147d850cc5c succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7689.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7689.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7689.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7689.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 11 11:32:51.361: INFO: File wheezy_udp@dns-test-service-3.dns-7689.svc.cluster.local from pod dns-7689/dns-test-6e659956-eac8-46a7-b397-1e64351275c6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 11 11:32:51.574: INFO: Lookups using dns-7689/dns-test-6e659956-eac8-46a7-b397-1e64351275c6 failed for: [wheezy_udp@dns-test-service-3.dns-7689.svc.cluster.local]
Aug 11 11:32:56.582: INFO: File jessie_udp@dns-test-service-3.dns-7689.svc.cluster.local from pod dns-7689/dns-test-6e659956-eac8-46a7-b397-1e64351275c6 contains '' instead of 'bar.example.com.'
Aug 11 11:32:56.582: INFO: Lookups using dns-7689/dns-test-6e659956-eac8-46a7-b397-1e64351275c6 failed for: [jessie_udp@dns-test-service-3.dns-7689.svc.cluster.local]
Aug 11 11:33:01.583: INFO: DNS probes using dns-test-6e659956-eac8-46a7-b397-1e64351275c6 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7689.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7689.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7689.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7689.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 11 11:33:13.647: INFO: DNS probes using dns-test-b105290a-eef7-4213-b7e6-c63c6a460b56 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:33:14.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7689" for this suite.
• [SLOW TEST:61.282 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":1,"skipped":8,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:33:15.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 11 11:33:16.135: INFO: Waiting up to 5m0s for pod "pod-282f05ff-4de8-4a25-b4be-50bae1473596" in namespace "emptydir-1745" to be "Succeeded or Failed"
Aug 11 11:33:16.970: INFO: Pod "pod-282f05ff-4de8-4a25-b4be-50bae1473596": Phase="Pending", Reason="", readiness=false. Elapsed: 835.121056ms
Aug 11 11:33:19.910: INFO: Pod "pod-282f05ff-4de8-4a25-b4be-50bae1473596": Phase="Pending", Reason="", readiness=false. Elapsed: 3.775500752s
Aug 11 11:33:22.031: INFO: Pod "pod-282f05ff-4de8-4a25-b4be-50bae1473596": Phase="Pending", Reason="", readiness=false. Elapsed: 5.896414716s
Aug 11 11:33:24.043: INFO: Pod "pod-282f05ff-4de8-4a25-b4be-50bae1473596": Phase="Pending", Reason="", readiness=false. Elapsed: 7.90891045s
Aug 11 11:33:26.270: INFO: Pod "pod-282f05ff-4de8-4a25-b4be-50bae1473596": Phase="Pending", Reason="", readiness=false. Elapsed: 10.135288006s
Aug 11 11:33:28.404: INFO: Pod "pod-282f05ff-4de8-4a25-b4be-50bae1473596": Phase="Pending", Reason="", readiness=false. Elapsed: 12.269338248s
Aug 11 11:33:30.432: INFO: Pod "pod-282f05ff-4de8-4a25-b4be-50bae1473596": Phase="Pending", Reason="", readiness=false. Elapsed: 14.297033231s
Aug 11 11:33:33.054: INFO: Pod "pod-282f05ff-4de8-4a25-b4be-50bae1473596": Phase="Running", Reason="", readiness=true. Elapsed: 16.919080643s
Aug 11 11:33:35.071: INFO: Pod "pod-282f05ff-4de8-4a25-b4be-50bae1473596": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.936869374s
STEP: Saw pod success
Aug 11 11:33:35.072: INFO: Pod "pod-282f05ff-4de8-4a25-b4be-50bae1473596" satisfied condition "Succeeded or Failed"
Aug 11 11:33:35.074: INFO: Trying to get logs from node kali-worker pod pod-282f05ff-4de8-4a25-b4be-50bae1473596 container test-container:
STEP: delete the pod
Aug 11 11:33:36.531: INFO: Waiting for pod pod-282f05ff-4de8-4a25-b4be-50bae1473596 to disappear
Aug 11 11:33:36.842: INFO: Pod pod-282f05ff-4de8-4a25-b4be-50bae1473596 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:33:36.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1745" for this suite.
• [SLOW TEST:22.049 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":34,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime
  blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:33:37.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 11 11:33:47.161: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:33:47.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1560" for this suite.
• [SLOW TEST:9.915 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":43,"failed":0}
S
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:33:47.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 11:33:47.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:33:54.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2688" for this suite.
• [SLOW TEST:7.215 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":44,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:33:54.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 11 11:33:57.047: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9959 /api/v1/namespaces/watch-9959/configmaps/e2e-watch-test-resource-version 1a8a4af8-ce96-4d32-bd1b-8fa7bfa6b2c0 8542964 0 2020-08-11 11:33:55 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-11 11:33:56 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}},}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 11:33:57.047: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9959 /api/v1/namespaces/watch-9959/configmaps/e2e-watch-test-resource-version 1a8a4af8-ce96-4d32-bd1b-8fa7bfa6b2c0 8542967 0 2020-08-11 11:33:55 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-11 11:33:56 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}},}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:33:57.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9959" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":5,"skipped":80,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:33:57.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 11:33:57.488: INFO: Creating deployment "webserver-deployment"
Aug 11 11:33:57.493: INFO: Waiting for observed generation 1
Aug 11 11:33:59.609: INFO: Waiting for all required pods to come up
Aug 11 11:33:59.615: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 11 11:34:13.626: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 11 11:34:13.632: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 11 11:34:13.688: INFO: Updating deployment webserver-deployment
Aug 11 11:34:13.688: INFO: Waiting for observed generation 2
Aug 11 11:34:15.929: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 11 11:34:17.414: INFO: Waiting for the first rollout's replicaset to have
.spec.replicas = 8 Aug 11 11:34:17.417: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 11 11:34:18.949: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 11 11:34:18.949: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 11 11:34:19.114: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 11 11:34:19.119: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Aug 11 11:34:19.119: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Aug 11 11:34:19.126: INFO: Updating deployment webserver-deployment Aug 11 11:34:19.126: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Aug 11 11:34:20.241: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 11 11:34:20.983: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 11 11:34:24.574: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3680 /apis/apps/v1/namespaces/deployment-3680/deployments/webserver-deployment b15f63f5-4929-41a7-8754-14c67647ade2 8543323 3 2020-08-11 11:33:57 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-11 11:34:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 
99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 
105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 
97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f26af8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-11 11:34:19 +0000 UTC,LastTransitionTime:2020-08-11 11:34:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-08-11 11:34:21 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Aug 11 11:34:24.983: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": 
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-3680 /apis/apps/v1/namespaces/deployment-3680/replicasets/webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 8543316 3 2020-08-11 11:34:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment b15f63f5-4929-41a7-8754-14c67647ade2 0xc002f26f97 0xc002f26f98}] [] [{kube-controller-manager Update apps/v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 49 53 102 54 51 102 53 45 52 57 50 57 45 52 49 97 55 45 56 55 53 52 45 49 52 99 54 55 54 52 55 97 100 101 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 
107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 
125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f27028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 11 11:34:24.983: INFO: All old ReplicaSets of Deployment "webserver-deployment": Aug 11 11:34:24.983: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-3680 /apis/apps/v1/namespaces/deployment-3680/replicasets/webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 8543302 3 2020-08-11 11:33:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 
b15f63f5-4929-41a7-8754-14c67647ade2 0xc002f27087 0xc002f27088}] [] [{kube-controller-manager Update apps/v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[…managed-fields JSON, dumped by the logger as raw decimal byte values; elided…
],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f270f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Aug 11 11:34:25.047: INFO: Pod "webserver-deployment-6676bcd6d4-47wvk" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-47wvk webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-47wvk 8f608d06-75d8-43f4-95a2-d9eb7cf477c8 8543361 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc002f27717 0xc002f27718}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[…managed-fields JSON, dumped by the logger as raw decimal byte values; elided…
…managed-fields JSON, dumped by the logger as raw decimal byte values; elided…],}} {kubelet Update v1 2020-08-11 11:34:23 +0000 UTC FieldsV1 &FieldsV1{Raw:*[…managed-fields JSON, dumped by the logger as raw decimal byte values; elided…
],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 11:34:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.047: INFO: Pod "webserver-deployment-6676bcd6d4-498x6" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-498x6 webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-498x6 896a40b6-f6da-4b06-99ab-7c98113df1e4 8543303 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc002f278c7 0xc002f278c8}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[…managed-fields JSON, dumped by the logger as raw decimal byte values; elided…
],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.047: INFO: Pod "webserver-deployment-6676bcd6d4-4gbmg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4gbmg webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-4gbmg dc05c845-f866-494b-aec6-dec1e57482ea 8543368 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc002f27a07 0xc002f27a08}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[…managed-fields JSON, dumped by the logger as raw decimal byte values; elided…],}} {kubelet Update v1 2020-08-11 11:34:24 +0000 UTC FieldsV1 &FieldsV1{Raw:*[…managed-fields JSON, dumped by the logger as raw decimal byte values; elided…
],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-11 11:34:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.048: INFO: Pod "webserver-deployment-6676bcd6d4-4nhd8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4nhd8 webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-4nhd8 58a511a8-6d3c-4df9-887c-8121a2b43e9a 8543338 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc002f27bb7 0xc002f27bb8}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[…managed-fields JSON, dumped by the logger as raw decimal byte values; elided…],}} {kubelet Update v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[…managed-fields JSON, dumped by the logger as raw decimal byte values; elided…
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-11 11:34:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.048: INFO: Pod "webserver-deployment-6676bcd6d4-7s8mw" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7s8mw webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-7s8mw 2c80e65f-faa3-43c5-9cf9-f7ec22a21bb7 8543300 0 2020-08-11 11:34:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc002f27d67 0xc002f27d68}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:13 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd69552e-93ed-4beb-be04-be35246bd2cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.229\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.229,StartTime:2020-08-11 11:34:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope:
authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.229,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.048: INFO: Pod "webserver-deployment-6676bcd6d4-b7fxb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-b7fxb webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-b7fxb 64bec234-c3de-466b-bada-e6f251ccc84d 8543330 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc002f27f47 0xc002f27f48}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd69552e-93ed-4beb-be04-be35246bd2cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-11 11:34:21 +0000
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.049: INFO: Pod "webserver-deployment-6676bcd6d4-djc9l" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-djc9l webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-djc9l 5d96a425-9e1b-4ae4-86c9-c6c486113dde 8543348 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc0028c01d7 0xc0028c01d8}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd69552e-93ed-4beb-be04-be35246bd2cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-11 11:34:22 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-11 11:34:21 +0000
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.049: INFO: Pod "webserver-deployment-6676bcd6d4-j4ht2" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-j4ht2 webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-j4ht2 811666d3-0762-4457-a893-cc74b366735a 8543222 0 2020-08-11 11:34:14 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc0028c0387 0xc0028c0388}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:14 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd69552e-93ed-4beb-be04-be35246bd2cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-11 11:34:16 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 11:34:14 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.049: INFO: Pod "webserver-deployment-6676bcd6d4-l5bt5" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-l5bt5 webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-l5bt5 1b5d107e-7e24-4688-bec1-f6451ef358a6 8543211 0 2020-08-11 11:34:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc0028c0627 0xc0028c0628}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:13 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd69552e-93ed-4beb-be04-be35246bd2cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-11 11:34:14 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-11 11:34:14 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.049: INFO: Pod "webserver-deployment-6676bcd6d4-ntq54" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-ntq54 webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-ntq54 49ace268-e7a6-48a3-a1bf-2dd8bded6443 8543324 0 2020-08-11 11:34:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc0028c07e7 0xc0028c07e8}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:19 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd69552e-93ed-4beb-be04-be35246bd2cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-11 11:34:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.050: INFO: Pod "webserver-deployment-6676bcd6d4-pb9bs" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pb9bs webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-pb9bs 036e27d2-c69d-46c1-acb9-e4e1403b7d69 8543215 0 2020-08-11 11:34:14 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc0028c0997 0xc0028c0998}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:14 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd69552e-93ed-4beb-be04-be35246bd2cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-11 11:34:15 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 11:34:14 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.050: INFO: Pod "webserver-deployment-6676bcd6d4-tsk5b" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tsk5b webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-tsk5b 83e298da-048f-4f6b-a35d-ee1b34e9983f 8543344 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc0028c0b47 0xc0028c0b48}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 100 54 57 53 53 50 101 45 57 51 101 100 45 52 98 101 98 45 98 101 48 52 45 98 101 51 53 50 52 54 98 100 50 99 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 11:34:22 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 
125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 11:34:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.050: INFO: Pod "webserver-deployment-6676bcd6d4-wth6w" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wth6w webserver-deployment-6676bcd6d4- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-6676bcd6d4-wth6w 34affaa5-d60b-460a-a9b6-2a4f6ac0e162 8543196 0 2020-08-11 11:34:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 cd69552e-93ed-4beb-be04-be35246bd2cc 0xc0028c0cf7 0xc0028c0cf8}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 100 54 57 53 53 50 101 45 57 51 101 100 45 52 98 101 98 45 98 101 48 52 45 98 101 51 53 50 52 54 98 100 50 99 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 11:34:14 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 
125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 11:34:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.050: INFO: Pod "webserver-deployment-84855cf797-22x45" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-22x45 webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-22x45 87f0251d-5ca6-48ba-8a3c-d77edab7772e 8543119 0 2020-08-11 11:33:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0028c0ea7 0xc0028c0ea8}] [] [{kube-controller-manager Update v1 2020-08-11 11:33:57 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 48 51 50 51 49 55 48 45 56 97 51 51 45 52 102 48 56 45 97 57 48 49 45 99 54 53 57 56 50 53 101 48 101 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 
44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 11:34:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 
123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 50 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.226,StartTime:2020-08-11 11:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 11:34:08 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3a3bf9d103c60ada6e29c381266c9d36527eb0e86dd21096faaa831c92e0b7cd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.226,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.050: INFO: Pod "webserver-deployment-84855cf797-47vmv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-47vmv webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-47vmv 0dc99d43-de57-4cd7-a391-31aabcd9a256 8543356 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0028c1077 0xc0028c1078}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 48 51 50 51 49 55 48 45 56 97 51 51 45 52 102 48 56 45 97 57 48 49 45 99 54 53 57 56 50 53 101 48 101 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 11:34:23 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 
123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-11 11:34:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.051: INFO: Pod "webserver-deployment-84855cf797-4b5s5" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4b5s5 webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-4b5s5 fb39137d-6719-4beb-98fd-8bdfeb3aa765 8543081 0 2020-08-11 11:33:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0028c1207 0xc0028c1208}] [] [{kube-controller-manager Update v1 2020-08-11 11:33:57 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60323170-8a33-4f08-a901-c659825e0e4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-11 11:34:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.225\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.225,StartTime:2020-08-11 11:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 11:34:02 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f9f526ab163f80bdd34b5c97239061ca2ccda353f7e254ab607002b0e557b3af,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.051: INFO: Pod "webserver-deployment-84855cf797-4d6gv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4d6gv webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-4d6gv 50517c95-8442-46bd-abef-d664328271d8 8543333 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0028c13b7 0xc0028c13b8}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60323170-8a33-4f08-a901-c659825e0e4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 11:34:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.051: INFO: Pod "webserver-deployment-84855cf797-58cml" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-58cml webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-58cml 49a42617-b409-4d67-9ac9-0c8490bdb9b6 8543326 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0028c1547 0xc0028c1548}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60323170-8a33-4f08-a901-c659825e0e4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 11:34:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.051: INFO: Pod "webserver-deployment-84855cf797-9sbwd" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-9sbwd webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-9sbwd b0e16df6-ea74-4caa-9c3f-d44b02dee4ed 8543301 0 2020-08-11 11:34:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0028c16d7 0xc0028c16d8}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60323170-8a33-4f08-a901-c659825e0e4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 11:34:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 11 11:34:25.051: INFO: Pod "webserver-deployment-84855cf797-bb5qb" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bb5qb webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-bb5qb adddf1f5-d1d6-4cde-b742-0449d08bb236 8543097 0 2020-08-11 11:33:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0028c1867 0xc0028c1868}] [] [{kube-controller-manager Update v1 2020-08-11 11:33:57 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60323170-8a33-4f08-a901-c659825e0e4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-11 11:34:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.215\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.215,StartTime:2020-08-11 11:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 11:34:05 +0000
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9c415fc6b095c53455ae41c43af6d9561f77b22b5732bf49c9ca1830e23218eb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 11 11:34:25.052: INFO: Pod "webserver-deployment-84855cf797-bswjx" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bswjx webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-bswjx 139eba08-9e60-4a87-906d-a694df832731 8543352 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0028c1a17 0xc0028c1a18}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60323170-8a33-4f08-a901-c659825e0e4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-11 11:34:22 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 11:34:21 +0000
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 11 11:34:25.052: INFO: Pod "webserver-deployment-84855cf797-h4qgj" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-h4qgj webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-h4qgj 1637713f-5ec7-42bd-907c-f96ff3a799ef 8543318 0 2020-08-11 11:34:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0028c1ba7 0xc0028c1ba8}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60323170-8a33-4f08-a901-c659825e0e4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-11 11:34:20 +0000
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.052: INFO: Pod "webserver-deployment-84855cf797-hp89m" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hp89m webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-hp89m 93edae6e-cec3-45d2-80cb-97daf4d7b78f 8543130 0 2020-08-11 11:33:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0028c1d37 0xc0028c1d38}] [] [{kube-controller-manager Update v1 2020-08-11 11:33:57 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60323170-8a33-4f08-a901-c659825e0e4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-11 11:34:11 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.228\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.228,StartTime:2020-08-11 11:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 11:34:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d7bad0e0440774af301db973749017fe047ebbb524ecd87045b45323778a672c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.053: INFO: Pod "webserver-deployment-84855cf797-jqc4p" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jqc4p webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-jqc4p ba617519-be53-47e7-8a40-3d3d65570e36 8543145 0 2020-08-11 11:33:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0028c1ef7 0xc0028c1ef8}] [] [{kube-controller-manager Update v1 2020-08-11 11:33:57 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60323170-8a33-4f08-a901-c659825e0e4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-11 11:34:12 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.219\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.219,StartTime:2020-08-11 11:33:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 11:34:11 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0b2f151f8d66f9eaab64ccbe69a92f1e36c718bfaf0e522575fb877677b8a279,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.219,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.053: INFO: Pod "webserver-deployment-84855cf797-k6f82" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-k6f82 webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-k6f82 639af07e-4e0b-4f06-b22d-bd23fcf48e83 8543342 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0030440c7 0xc0030440c8}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60323170-8a33-4f08-a901-c659825e0e4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-11 11:34:22 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-11 11:34:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.053: INFO: Pod "webserver-deployment-84855cf797-m6js9" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-m6js9 webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-m6js9 91305da7-76ed-4039-a012-921dc1c05bd6 8543115 0 2020-08-11 11:33:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc003044257 0xc003044258}] [] [{kube-controller-manager Update v1 2020-08-11 11:33:57 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60323170-8a33-4f08-a901-c659825e0e4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-11 11:34:09 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.227\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.227,StartTime:2020-08-11 11:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 11:34:08 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://70ce99240d1a741a0dda17b09a44d5822fc0ee5b73180d9dd080c5c819904a01,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.053: INFO: Pod "webserver-deployment-84855cf797-mm52m" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-mm52m webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-mm52m a02678ff-d1bd-481f-85f7-8948b78fccf0 8543296 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc003044427 0xc003044428}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 48 51 50 51 49 55 48 45 56 97 51 51 45 52 102 48 56 45 97 57 48 49 45 99 54 53 57 56 50 53 101 48 101 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.053: INFO: Pod "webserver-deployment-84855cf797-plksg" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-plksg webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-plksg b402e15b-f8bc-4caf-a4f2-c35466b9ca23 8543341 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc003044557 0xc003044558}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 
34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 48 51 50 51 49 55 48 45 56 97 51 51 45 52 102 48 56 45 97 57 48 49 45 99 54 53 57 56 50 53 101 48 101 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 
111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 11:34:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.054: INFO: Pod "webserver-deployment-84855cf797-v5bqw" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-v5bqw webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-v5bqw ea0fc056-ff5e-45ae-af62-515bf3b093d0 8543336 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc0030446e7 0xc0030446e8}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 48 51 50 51 49 55 48 45 56 97 51 51 45 52 102 48 56 45 97 57 48 49 45 99 54 53 57 56 50 53 101 48 101 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 
107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 
115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 11:34:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.054: INFO: Pod "webserver-deployment-84855cf797-v9n9f" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-v9n9f webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-v9n9f c8f275f5-4ad1-4efd-8265-6cb214e564eb 8543141 0 2020-08-11 11:33:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc003044877 0xc003044878}] [] [{kube-controller-manager Update v1 2020-08-11 11:33:57 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 48 51 50 51 49 55 48 45 56 97 51 51 45 52 102 48 56 45 97 57 48 49 45 99 54 53 57 56 50 53 101 48 101 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 
105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 11:34:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 
58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 49 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.218,StartTime:2020-08-11 11:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 11:34:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://85e0377381c2e5b3f0deef34669b5370a39261f4190f2ad09863363517f2a4da,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.218,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.054: INFO: Pod "webserver-deployment-84855cf797-wrnhq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wrnhq webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-wrnhq 38c413d1-a2df-4219-965e-46e5b928e346 8543132 0 2020-08-11 11:33:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc003044a37 0xc003044a38}] [] [{kube-controller-manager Update v1 2020-08-11 11:33:57 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 48 51 50 51 49 55 48 45 56 97 51 51 45 52 102 48 56 45 97 57 48 49 45 99 54 53 57 56 50 53 101 48 101 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 
34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 11:34:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 
34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 49 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.216,StartTime:2020-08-11 11:33:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 11:34:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6147645e3a0f4884fc68a8b6bd5c75e2dec011628307de7d859a80481265ddda,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.216,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.054: INFO: Pod "webserver-deployment-84855cf797-xqzf5" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xqzf5 webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-xqzf5 dc83768d-ea4a-4639-a96b-cfbc669babc9 8543320 0 2020-08-11 11:34:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc003044be7 0xc003044be8}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 48 51 50 51 49 55 48 45 56 97 51 51 45 52 102 48 56 45 97 57 48 49 45 99 54 53 57 56 50 53 101 48 101 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 
123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 11:34:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 11:34:25.054: INFO: Pod "webserver-deployment-84855cf797-z6cvj" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-z6cvj webserver-deployment-84855cf797- deployment-3680 /api/v1/namespaces/deployment-3680/pods/webserver-deployment-84855cf797-z6cvj bff586a3-0614-4b7e-97c3-44539f1fda10 8543334 0 2020-08-11 11:34:20 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 60323170-8a33-4f08-a901-c659825e0e4f 0xc003044d77 0xc003044d78}] [] [{kube-controller-manager Update v1 2020-08-11 11:34:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 48 51 50 51 49 55 48 45 56 97 51 51 45 52 102 48 56 45 97 57 48 49 45 99 54 53 57 56 50 53 101 48 101 52 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 
107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 11:34:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 
115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vxtss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vxtss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vxtss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 11:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-11 11:34:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:34:25.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3680" for this suite. • [SLOW TEST:29.216 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":6,"skipped":108,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:34:26.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should 
provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:34:27.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3534" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":7,"skipped":116,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:34:28.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 11 11:34:55.724: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 11 11:34:56.266: INFO: Pod pod-with-prestop-http-hook still exists Aug 11 11:34:58.267: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 11 11:34:58.422: INFO: Pod pod-with-prestop-http-hook still exists Aug 11 11:35:00.267: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 11 11:35:00.515: INFO: Pod pod-with-prestop-http-hook still exists Aug 11 11:35:02.267: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 11 11:35:02.678: INFO: Pod pod-with-prestop-http-hook still exists Aug 11 11:35:04.267: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 11 11:35:04.683: INFO: Pod pod-with-prestop-http-hook still exists Aug 11 11:35:06.267: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 11 11:35:06.533: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:35:08.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9463" for this suite. 
• [SLOW TEST:40.527 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":116,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:35:08.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Aug 11 11:35:14.264: INFO: Waiting up to 5m0s for pod "var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b" in namespace "var-expansion-6349" to be "Succeeded or Failed"
Aug 11 11:35:14.539: INFO: Pod "var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b": Phase="Pending", Reason="", readiness=false. Elapsed: 275.161626ms
Aug 11 11:35:16.875: INFO: Pod "var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.6112766s
Aug 11 11:35:19.493: INFO: Pod "var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.229065182s
Aug 11 11:35:22.271: INFO: Pod "var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.006557343s
Aug 11 11:35:25.492: INFO: Pod "var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.227910595s
Aug 11 11:35:28.791: INFO: Pod "var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.526664088s
Aug 11 11:35:31.543: INFO: Pod "var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b": Phase="Running", Reason="", readiness=true. Elapsed: 17.279002588s
Aug 11 11:35:34.150: INFO: Pod "var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b": Phase="Running", Reason="", readiness=true. Elapsed: 19.885570601s
Aug 11 11:35:36.709: INFO: Pod "var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.444622912s
STEP: Saw pod success
Aug 11 11:35:36.709: INFO: Pod "var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b" satisfied condition "Succeeded or Failed"
Aug 11 11:35:37.020: INFO: Trying to get logs from node kali-worker2 pod var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b container dapi-container:
STEP: delete the pod
Aug 11 11:35:40.376: INFO: Waiting for pod var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b to disappear
Aug 11 11:35:41.442: INFO: Pod var-expansion-cf54eb07-09f3-4a05-89df-04ba6608f63b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:35:41.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6349" for this suite.
• [SLOW TEST:35.698 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":124,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:35:44.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename
crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 11 11:35:46.583: INFO: >>> kubeConfig: /root/.kube/config
Aug 11 11:35:51.565: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:36:09.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6524" for this suite.
• [SLOW TEST:25.100 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":10,"skipped":129,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:36:09.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1421
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Aug 11 11:36:12.743: INFO: Found 0 stateful pods, waiting for 3
Aug 11 11:36:22.875: INFO: Found 2 stateful pods, waiting for 3
Aug 11 11:36:33.992: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 11:36:33.992: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 11:36:33.992: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 11 11:36:43.008: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 11:36:43.009: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 11:36:43.009: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 11 11:36:52.971: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 11:36:52.971: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 11:36:52.971: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 11:36:53.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1421 ss2-1 -- /bin/sh -x -c mv -v
/usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 11 11:37:18.247: INFO: stderr: "I0811 11:37:17.570667 32 log.go:172] (0xc0000e1340) (0xc0002d6fa0) Create stream\nI0811 11:37:17.570713 32 log.go:172] (0xc0000e1340) (0xc0002d6fa0) Stream added, broadcasting: 1\nI0811 11:37:17.573868 32 log.go:172] (0xc0000e1340) Reply frame received for 1\nI0811 11:37:17.573975 32 log.go:172] (0xc0000e1340) (0xc000736640) Create stream\nI0811 11:37:17.573998 32 log.go:172] (0xc0000e1340) (0xc000736640) Stream added, broadcasting: 3\nI0811 11:37:17.576277 32 log.go:172] (0xc0000e1340) Reply frame received for 3\nI0811 11:37:17.576307 32 log.go:172] (0xc0000e1340) (0xc000512000) Create stream\nI0811 11:37:17.576324 32 log.go:172] (0xc0000e1340) (0xc000512000) Stream added, broadcasting: 5\nI0811 11:37:17.577157 32 log.go:172] (0xc0000e1340) Reply frame received for 5\nI0811 11:37:17.625184 32 log.go:172] (0xc0000e1340) Data frame received for 5\nI0811 11:37:17.625224 32 log.go:172] (0xc000512000) (5) Data frame handling\nI0811 11:37:17.625259 32 log.go:172] (0xc000512000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 11:37:18.238947 32 log.go:172] (0xc0000e1340) Data frame received for 3\nI0811 11:37:18.238985 32 log.go:172] (0xc0000e1340) Data frame received for 5\nI0811 11:37:18.239002 32 log.go:172] (0xc000512000) (5) Data frame handling\nI0811 11:37:18.239032 32 log.go:172] (0xc000736640) (3) Data frame handling\nI0811 11:37:18.239066 32 log.go:172] (0xc000736640) (3) Data frame sent\nI0811 11:37:18.239090 32 log.go:172] (0xc0000e1340) Data frame received for 3\nI0811 11:37:18.239098 32 log.go:172] (0xc000736640) (3) Data frame handling\nI0811 11:37:18.241264 32 log.go:172] (0xc0000e1340) Data frame received for 1\nI0811 11:37:18.241317 32 log.go:172] (0xc0002d6fa0) (1) Data frame handling\nI0811 11:37:18.241357 32 log.go:172] (0xc0002d6fa0) (1) Data frame sent\nI0811 11:37:18.241388 32 log.go:172] (0xc0000e1340) (0xc0002d6fa0) Stream 
removed, broadcasting: 1\nI0811 11:37:18.241430 32 log.go:172] (0xc0000e1340) Go away received\nI0811 11:37:18.241838 32 log.go:172] (0xc0000e1340) (0xc0002d6fa0) Stream removed, broadcasting: 1\nI0811 11:37:18.241852 32 log.go:172] (0xc0000e1340) (0xc000736640) Stream removed, broadcasting: 3\nI0811 11:37:18.241858 32 log.go:172] (0xc0000e1340) (0xc000512000) Stream removed, broadcasting: 5\n" Aug 11 11:37:18.247: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 11 11:37:18.247: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 11 11:37:28.309: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 11 11:37:38.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1421 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 11 11:37:38.635: INFO: stderr: "I0811 11:37:38.564410 63 log.go:172] (0xc00090cb00) (0xc00067d7c0) Create stream\nI0811 11:37:38.564459 63 log.go:172] (0xc00090cb00) (0xc00067d7c0) Stream added, broadcasting: 1\nI0811 11:37:38.567432 63 log.go:172] (0xc00090cb00) Reply frame received for 1\nI0811 11:37:38.567472 63 log.go:172] (0xc00090cb00) (0xc00032ebe0) Create stream\nI0811 11:37:38.567482 63 log.go:172] (0xc00090cb00) (0xc00032ebe0) Stream added, broadcasting: 3\nI0811 11:37:38.568567 63 log.go:172] (0xc00090cb00) Reply frame received for 3\nI0811 11:37:38.568594 63 log.go:172] (0xc00090cb00) (0xc00067d860) Create stream\nI0811 11:37:38.568600 63 log.go:172] (0xc00090cb00) (0xc00067d860) Stream added, broadcasting: 5\nI0811 11:37:38.569681 63 log.go:172] (0xc00090cb00) Reply frame received for 5\nI0811 
11:37:38.628111 63 log.go:172] (0xc00090cb00) Data frame received for 3\nI0811 11:37:38.628147 63 log.go:172] (0xc00032ebe0) (3) Data frame handling\nI0811 11:37:38.628158 63 log.go:172] (0xc00032ebe0) (3) Data frame sent\nI0811 11:37:38.628167 63 log.go:172] (0xc00090cb00) Data frame received for 3\nI0811 11:37:38.628174 63 log.go:172] (0xc00032ebe0) (3) Data frame handling\nI0811 11:37:38.628210 63 log.go:172] (0xc00090cb00) Data frame received for 5\nI0811 11:37:38.628230 63 log.go:172] (0xc00067d860) (5) Data frame handling\nI0811 11:37:38.628244 63 log.go:172] (0xc00067d860) (5) Data frame sent\nI0811 11:37:38.628251 63 log.go:172] (0xc00090cb00) Data frame received for 5\nI0811 11:37:38.628258 63 log.go:172] (0xc00067d860) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 11:37:38.629814 63 log.go:172] (0xc00090cb00) Data frame received for 1\nI0811 11:37:38.629844 63 log.go:172] (0xc00067d7c0) (1) Data frame handling\nI0811 11:37:38.629863 63 log.go:172] (0xc00067d7c0) (1) Data frame sent\nI0811 11:37:38.629876 63 log.go:172] (0xc00090cb00) (0xc00067d7c0) Stream removed, broadcasting: 1\nI0811 11:37:38.629906 63 log.go:172] (0xc00090cb00) Go away received\nI0811 11:37:38.630211 63 log.go:172] (0xc00090cb00) (0xc00067d7c0) Stream removed, broadcasting: 1\nI0811 11:37:38.630230 63 log.go:172] (0xc00090cb00) (0xc00032ebe0) Stream removed, broadcasting: 3\nI0811 11:37:38.630240 63 log.go:172] (0xc00090cb00) (0xc00067d860) Stream removed, broadcasting: 5\n" Aug 11 11:37:38.635: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 11 11:37:38.635: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 11 11:37:48.687: INFO: Waiting for StatefulSet statefulset-1421/ss2 to complete update Aug 11 11:37:48.687: INFO: Waiting for Pod statefulset-1421/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 
Aug 11 11:37:48.687: INFO: Waiting for Pod statefulset-1421/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 11 11:37:48.687: INFO: Waiting for Pod statefulset-1421/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 11 11:37:58.927: INFO: Waiting for StatefulSet statefulset-1421/ss2 to complete update Aug 11 11:37:58.927: INFO: Waiting for Pod statefulset-1421/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 11 11:37:58.927: INFO: Waiting for Pod statefulset-1421/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 11 11:38:09.014: INFO: Waiting for StatefulSet statefulset-1421/ss2 to complete update Aug 11 11:38:09.014: INFO: Waiting for Pod statefulset-1421/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 11 11:38:09.014: INFO: Waiting for Pod statefulset-1421/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 11 11:38:18.764: INFO: Waiting for StatefulSet statefulset-1421/ss2 to complete update Aug 11 11:38:18.764: INFO: Waiting for Pod statefulset-1421/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 11 11:38:28.694: INFO: Waiting for StatefulSet statefulset-1421/ss2 to complete update STEP: Rolling back to a previous revision Aug 11 11:38:38.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1421 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 11 11:38:39.797: INFO: stderr: "I0811 11:38:39.052052 86 log.go:172] (0xc000a90370) (0xc000842000) Create stream\nI0811 11:38:39.052098 86 log.go:172] (0xc000a90370) (0xc000842000) Stream added, broadcasting: 1\nI0811 11:38:39.055382 86 log.go:172] (0xc000a90370) Reply frame received for 1\nI0811 11:38:39.055424 86 log.go:172] (0xc000a90370) (0xc00048b680) Create stream\nI0811 11:38:39.055441 86 log.go:172] (0xc000a90370) (0xc00048b680) 
Stream added, broadcasting: 3\nI0811 11:38:39.056251 86 log.go:172] (0xc000a90370) Reply frame received for 3\nI0811 11:38:39.056275 86 log.go:172] (0xc000a90370) (0xc00063d7c0) Create stream\nI0811 11:38:39.056283 86 log.go:172] (0xc000a90370) (0xc00063d7c0) Stream added, broadcasting: 5\nI0811 11:38:39.057235 86 log.go:172] (0xc000a90370) Reply frame received for 5\nI0811 11:38:39.108719 86 log.go:172] (0xc000a90370) Data frame received for 5\nI0811 11:38:39.112844 86 log.go:172] (0xc00063d7c0) (5) Data frame handling\nI0811 11:38:39.112860 86 log.go:172] (0xc00063d7c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 11:38:39.788978 86 log.go:172] (0xc000a90370) Data frame received for 3\nI0811 11:38:39.789015 86 log.go:172] (0xc00048b680) (3) Data frame handling\nI0811 11:38:39.789033 86 log.go:172] (0xc00048b680) (3) Data frame sent\nI0811 11:38:39.789332 86 log.go:172] (0xc000a90370) Data frame received for 5\nI0811 11:38:39.789347 86 log.go:172] (0xc00063d7c0) (5) Data frame handling\nI0811 11:38:39.789372 86 log.go:172] (0xc000a90370) Data frame received for 3\nI0811 11:38:39.789381 86 log.go:172] (0xc00048b680) (3) Data frame handling\nI0811 11:38:39.791173 86 log.go:172] (0xc000a90370) Data frame received for 1\nI0811 11:38:39.791193 86 log.go:172] (0xc000842000) (1) Data frame handling\nI0811 11:38:39.791211 86 log.go:172] (0xc000842000) (1) Data frame sent\nI0811 11:38:39.791226 86 log.go:172] (0xc000a90370) (0xc000842000) Stream removed, broadcasting: 1\nI0811 11:38:39.791383 86 log.go:172] (0xc000a90370) Go away received\nI0811 11:38:39.791523 86 log.go:172] (0xc000a90370) (0xc000842000) Stream removed, broadcasting: 1\nI0811 11:38:39.791545 86 log.go:172] (0xc000a90370) (0xc00048b680) Stream removed, broadcasting: 3\nI0811 11:38:39.791557 86 log.go:172] (0xc000a90370) (0xc00063d7c0) Stream removed, broadcasting: 5\n" Aug 11 11:38:39.797: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 11 
11:38:39.797: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 11 11:38:50.154: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 11 11:39:01.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1421 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 11 11:39:02.315: INFO: stderr: "I0811 11:39:02.237628 104 log.go:172] (0xc0009496b0) (0xc000928820) Create stream\nI0811 11:39:02.237689 104 log.go:172] (0xc0009496b0) (0xc000928820) Stream added, broadcasting: 1\nI0811 11:39:02.241684 104 log.go:172] (0xc0009496b0) Reply frame received for 1\nI0811 11:39:02.241735 104 log.go:172] (0xc0009496b0) (0xc0007152c0) Create stream\nI0811 11:39:02.241750 104 log.go:172] (0xc0009496b0) (0xc0007152c0) Stream added, broadcasting: 3\nI0811 11:39:02.243031 104 log.go:172] (0xc0009496b0) Reply frame received for 3\nI0811 11:39:02.243085 104 log.go:172] (0xc0009496b0) (0xc0006b1680) Create stream\nI0811 11:39:02.243102 104 log.go:172] (0xc0009496b0) (0xc0006b1680) Stream added, broadcasting: 5\nI0811 11:39:02.244032 104 log.go:172] (0xc0009496b0) Reply frame received for 5\nI0811 11:39:02.305278 104 log.go:172] (0xc0009496b0) Data frame received for 3\nI0811 11:39:02.305313 104 log.go:172] (0xc0007152c0) (3) Data frame handling\nI0811 11:39:02.305334 104 log.go:172] (0xc0007152c0) (3) Data frame sent\nI0811 11:39:02.306296 104 log.go:172] (0xc0009496b0) Data frame received for 3\nI0811 11:39:02.306313 104 log.go:172] (0xc0007152c0) (3) Data frame handling\nI0811 11:39:02.307492 104 log.go:172] (0xc0009496b0) Data frame received for 5\nI0811 11:39:02.307507 104 log.go:172] (0xc0006b1680) (5) Data frame handling\nI0811 11:39:02.307521 104 log.go:172] (0xc0006b1680) (5) Data frame sent\nI0811 11:39:02.307528 104 log.go:172] 
(0xc0009496b0) Data frame received for 5\nI0811 11:39:02.307535 104 log.go:172] (0xc0006b1680) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 11:39:02.309132 104 log.go:172] (0xc0009496b0) Data frame received for 1\nI0811 11:39:02.309160 104 log.go:172] (0xc000928820) (1) Data frame handling\nI0811 11:39:02.309176 104 log.go:172] (0xc000928820) (1) Data frame sent\nI0811 11:39:02.309187 104 log.go:172] (0xc0009496b0) (0xc000928820) Stream removed, broadcasting: 1\nI0811 11:39:02.309199 104 log.go:172] (0xc0009496b0) Go away received\nI0811 11:39:02.309500 104 log.go:172] (0xc0009496b0) (0xc000928820) Stream removed, broadcasting: 1\nI0811 11:39:02.309522 104 log.go:172] (0xc0009496b0) (0xc0007152c0) Stream removed, broadcasting: 3\nI0811 11:39:02.309528 104 log.go:172] (0xc0009496b0) (0xc0006b1680) Stream removed, broadcasting: 5\n" Aug 11 11:39:02.315: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 11 11:39:02.315: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 11 11:39:13.913: INFO: Waiting for StatefulSet statefulset-1421/ss2 to complete update Aug 11 11:39:13.913: INFO: Waiting for Pod statefulset-1421/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 11 11:39:13.913: INFO: Waiting for Pod statefulset-1421/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 11 11:39:13.913: INFO: Waiting for Pod statefulset-1421/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 11 11:39:24.682: INFO: Waiting for StatefulSet statefulset-1421/ss2 to complete update Aug 11 11:39:24.682: INFO: Waiting for Pod statefulset-1421/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 11 11:39:24.682: INFO: Waiting for Pod statefulset-1421/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 11 11:39:35.189: INFO: 
Waiting for StatefulSet statefulset-1421/ss2 to complete update
Aug 11 11:39:35.189: INFO: Waiting for Pod statefulset-1421/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 11 11:39:35.189: INFO: Waiting for Pod statefulset-1421/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 11 11:39:44.533: INFO: Waiting for StatefulSet statefulset-1421/ss2 to complete update
Aug 11 11:39:44.533: INFO: Waiting for Pod statefulset-1421/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 11 11:39:53.983: INFO: Waiting for StatefulSet statefulset-1421/ss2 to complete update
Aug 11 11:39:53.983: INFO: Waiting for Pod statefulset-1421/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 11 11:40:04.333: INFO: Waiting for StatefulSet statefulset-1421/ss2 to complete update
Aug 11 11:40:04.333: INFO: Waiting for Pod statefulset-1421/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 11 11:40:16.016: INFO: Waiting for StatefulSet statefulset-1421/ss2 to complete update
Aug 11 11:40:23.921: INFO: Waiting for StatefulSet statefulset-1421/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 11 11:40:34.188: INFO: Deleting all statefulset in ns statefulset-1421
Aug 11 11:40:34.255: INFO: Scaling statefulset ss2 to 0
Aug 11 11:40:54.553: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 11:40:54.556: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:40:54.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1421" for this suite.
• [SLOW TEST:284.814 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":11,"skipped":146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:40:54.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:40:54.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-54" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":12,"skipped":172,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:40:55.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 11 11:41:11.757: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:41:11.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1911" for this suite.
• [SLOW TEST:16.461 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":180,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:41:11.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 11 11:41:20.971: INFO: 10 pods remaining
Aug 11 11:41:20.971: INFO: 0 pods has nil DeletionTimestamp
Aug 11 11:41:20.971: INFO:
Aug 11 11:41:22.855: INFO: 0 pods remaining
Aug 11 11:41:22.855: INFO: 0 pods has nil DeletionTimestamp
Aug 11 11:41:22.855: INFO:
Aug 11 11:41:23.842: INFO: 0 pods remaining
Aug 11 11:41:23.842: INFO: 0 pods has nil DeletionTimestamp
Aug 11 11:41:23.842: INFO:
STEP: Gathering metrics
W0811 11:41:24.233861 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 11:41:24.233: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:41:24.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4143" for this suite.
• [SLOW TEST:12.325 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":14,"skipped":201,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:41:24.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 11 11:41:26.096: INFO: Waiting up to 5m0s for pod "downward-api-88258c0e-2c20-468c-8fa2-3289dcf553c5" in namespace "downward-api-8213" to be "Succeeded or Failed" Aug 11 11:41:26.446: INFO: Pod "downward-api-88258c0e-2c20-468c-8fa2-3289dcf553c5": Phase="Pending", Reason="", readiness=false. Elapsed: 349.702225ms Aug 11 11:41:28.509: INFO: Pod "downward-api-88258c0e-2c20-468c-8fa2-3289dcf553c5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.412371714s Aug 11 11:41:30.735: INFO: Pod "downward-api-88258c0e-2c20-468c-8fa2-3289dcf553c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.638735185s Aug 11 11:41:32.739: INFO: Pod "downward-api-88258c0e-2c20-468c-8fa2-3289dcf553c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.642424886s STEP: Saw pod success Aug 11 11:41:32.739: INFO: Pod "downward-api-88258c0e-2c20-468c-8fa2-3289dcf553c5" satisfied condition "Succeeded or Failed" Aug 11 11:41:32.742: INFO: Trying to get logs from node kali-worker2 pod downward-api-88258c0e-2c20-468c-8fa2-3289dcf553c5 container dapi-container: STEP: delete the pod Aug 11 11:41:32.808: INFO: Waiting for pod downward-api-88258c0e-2c20-468c-8fa2-3289dcf553c5 to disappear Aug 11 11:41:32.876: INFO: Pod downward-api-88258c0e-2c20-468c-8fa2-3289dcf553c5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:41:32.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8213" for this suite. 
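The framework lines above poll the pod's phase until it reaches the "Succeeded or Failed" condition or a 5-minute timeout expires. A simplified sketch of that wait loop (a hypothetical helper, not the framework's actual implementation), with the phase source injected so the snippet runs standalone:

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=0.0):
    """Poll get_phase() until it returns a terminal pod phase or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Simulate the Pending -> Pending -> Succeeded sequence seen in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases))
```

The real framework additionally logs the elapsed time on every poll, which is where the `Elapsed: …` entries above come from.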
• [SLOW TEST:8.644 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":204,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:41:32.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-520 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 11 11:41:33.102: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 11 11:41:33.295: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 11 11:41:35.352: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 11 11:41:37.299: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 11:41:39.300: INFO: The status of Pod netserver-0 is 
Running (Ready = false) Aug 11 11:41:41.300: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 11:41:43.300: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 11:41:45.300: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 11:41:47.299: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 11 11:41:47.304: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 11 11:41:49.309: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 11 11:41:53.439: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.4:8080/dial?request=hostname&protocol=http&host=10.244.2.3&port=8080&tries=1'] Namespace:pod-network-test-520 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 11:41:53.439: INFO: >>> kubeConfig: /root/.kube/config I0811 11:41:53.470629 7 log.go:172] (0xc002c48bb0) (0xc0013dabe0) Create stream I0811 11:41:53.470665 7 log.go:172] (0xc002c48bb0) (0xc0013dabe0) Stream added, broadcasting: 1 I0811 11:41:53.473360 7 log.go:172] (0xc002c48bb0) Reply frame received for 1 I0811 11:41:53.473409 7 log.go:172] (0xc002c48bb0) (0xc0013dac80) Create stream I0811 11:41:53.473425 7 log.go:172] (0xc002c48bb0) (0xc0013dac80) Stream added, broadcasting: 3 I0811 11:41:53.474467 7 log.go:172] (0xc002c48bb0) Reply frame received for 3 I0811 11:41:53.474512 7 log.go:172] (0xc002c48bb0) (0xc0013dad20) Create stream I0811 11:41:53.474532 7 log.go:172] (0xc002c48bb0) (0xc0013dad20) Stream added, broadcasting: 5 I0811 11:41:53.475669 7 log.go:172] (0xc002c48bb0) Reply frame received for 5 I0811 11:41:53.541056 7 log.go:172] (0xc002c48bb0) Data frame received for 3 I0811 11:41:53.541073 7 log.go:172] (0xc0013dac80) (3) Data frame handling I0811 11:41:53.541084 7 log.go:172] (0xc0013dac80) (3) Data frame sent I0811 11:41:53.541624 7 log.go:172] (0xc002c48bb0) Data frame received for 
3 I0811 11:41:53.541643 7 log.go:172] (0xc0013dac80) (3) Data frame handling I0811 11:41:53.541795 7 log.go:172] (0xc002c48bb0) Data frame received for 5 I0811 11:41:53.541823 7 log.go:172] (0xc0013dad20) (5) Data frame handling I0811 11:41:53.543361 7 log.go:172] (0xc002c48bb0) Data frame received for 1 I0811 11:41:53.543386 7 log.go:172] (0xc0013dabe0) (1) Data frame handling I0811 11:41:53.543405 7 log.go:172] (0xc0013dabe0) (1) Data frame sent I0811 11:41:53.543422 7 log.go:172] (0xc002c48bb0) (0xc0013dabe0) Stream removed, broadcasting: 1 I0811 11:41:53.543439 7 log.go:172] (0xc002c48bb0) Go away received I0811 11:41:53.543842 7 log.go:172] (0xc002c48bb0) (0xc0013dabe0) Stream removed, broadcasting: 1 I0811 11:41:53.543861 7 log.go:172] (0xc002c48bb0) (0xc0013dac80) Stream removed, broadcasting: 3 I0811 11:41:53.543872 7 log.go:172] (0xc002c48bb0) (0xc0013dad20) Stream removed, broadcasting: 5 Aug 11 11:41:53.543: INFO: Waiting for responses: map[] Aug 11 11:41:53.547: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.4:8080/dial?request=hostname&protocol=http&host=10.244.1.251&port=8080&tries=1'] Namespace:pod-network-test-520 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 11:41:53.547: INFO: >>> kubeConfig: /root/.kube/config I0811 11:41:53.577671 7 log.go:172] (0xc002d62420) (0xc001350aa0) Create stream I0811 11:41:53.577694 7 log.go:172] (0xc002d62420) (0xc001350aa0) Stream added, broadcasting: 1 I0811 11:41:53.582927 7 log.go:172] (0xc002d62420) Reply frame received for 1 I0811 11:41:53.582971 7 log.go:172] (0xc002d62420) (0xc0012fa960) Create stream I0811 11:41:53.582984 7 log.go:172] (0xc002d62420) (0xc0012fa960) Stream added, broadcasting: 3 I0811 11:41:53.584028 7 log.go:172] (0xc002d62420) Reply frame received for 3 I0811 11:41:53.584068 7 log.go:172] (0xc002d62420) (0xc000fc70e0) Create stream I0811 11:41:53.584078 7 log.go:172] (0xc002d62420) 
(0xc000fc70e0) Stream added, broadcasting: 5 I0811 11:41:53.585277 7 log.go:172] (0xc002d62420) Reply frame received for 5 I0811 11:41:53.654891 7 log.go:172] (0xc002d62420) Data frame received for 3 I0811 11:41:53.654978 7 log.go:172] (0xc0012fa960) (3) Data frame handling I0811 11:41:53.655012 7 log.go:172] (0xc0012fa960) (3) Data frame sent I0811 11:41:53.655403 7 log.go:172] (0xc002d62420) Data frame received for 3 I0811 11:41:53.655436 7 log.go:172] (0xc0012fa960) (3) Data frame handling I0811 11:41:53.655766 7 log.go:172] (0xc002d62420) Data frame received for 5 I0811 11:41:53.655780 7 log.go:172] (0xc000fc70e0) (5) Data frame handling I0811 11:41:53.657479 7 log.go:172] (0xc002d62420) Data frame received for 1 I0811 11:41:53.657505 7 log.go:172] (0xc001350aa0) (1) Data frame handling I0811 11:41:53.657531 7 log.go:172] (0xc001350aa0) (1) Data frame sent I0811 11:41:53.657553 7 log.go:172] (0xc002d62420) (0xc001350aa0) Stream removed, broadcasting: 1 I0811 11:41:53.657577 7 log.go:172] (0xc002d62420) Go away received I0811 11:41:53.657699 7 log.go:172] (0xc002d62420) (0xc001350aa0) Stream removed, broadcasting: 1 I0811 11:41:53.657718 7 log.go:172] (0xc002d62420) (0xc0012fa960) Stream removed, broadcasting: 3 I0811 11:41:53.657729 7 log.go:172] (0xc002d62420) (0xc000fc70e0) Stream removed, broadcasting: 5 Aug 11 11:41:53.657: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:41:53.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-520" for this suite. 
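The intra-pod connectivity check above curls the test pod's `/dial` endpoint, passing the target host and port as query parameters. A sketch of how such a probe URL can be assembled (the endpoint shape is taken from the logged curl command; the helper itself is hypothetical):

```python
from urllib.parse import urlencode

def dial_url(proxy_ip, target_ip, port=8080, tries=1):
    """Build an agnhost /dial probe URL like the one in the logged curl command."""
    query = urlencode({
        "request": "hostname",
        "protocol": "http",
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{proxy_ip}:8080/dial?{query}"

# The first probe in the log: test pod 10.244.2.4 dialing netserver 10.244.2.3.
url = dial_url("10.244.2.4", "10.244.2.3")
```

The `/dial` handler performs the request from inside the test pod and returns the responses it collected, which is why an empty `Waiting for responses: map[]` marks success.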
• [SLOW TEST:20.805 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":209,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:41:53.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-01b16278-6cce-413d-9da2-59843f8ecedf STEP: Creating a pod to test consume secrets Aug 11 11:41:53.777: INFO: Waiting up to 5m0s for pod "pod-secrets-88f1473f-8518-4088-8ab9-29060a7b14a7" in namespace "secrets-3054" to be "Succeeded or Failed" Aug 11 11:41:53.782: INFO: Pod "pod-secrets-88f1473f-8518-4088-8ab9-29060a7b14a7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.63394ms Aug 11 11:41:55.891: INFO: Pod "pod-secrets-88f1473f-8518-4088-8ab9-29060a7b14a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11368105s Aug 11 11:41:57.895: INFO: Pod "pod-secrets-88f1473f-8518-4088-8ab9-29060a7b14a7": Phase="Running", Reason="", readiness=true. Elapsed: 4.118466459s Aug 11 11:41:59.900: INFO: Pod "pod-secrets-88f1473f-8518-4088-8ab9-29060a7b14a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.123066761s STEP: Saw pod success Aug 11 11:41:59.900: INFO: Pod "pod-secrets-88f1473f-8518-4088-8ab9-29060a7b14a7" satisfied condition "Succeeded or Failed" Aug 11 11:41:59.903: INFO: Trying to get logs from node kali-worker pod pod-secrets-88f1473f-8518-4088-8ab9-29060a7b14a7 container secret-volume-test: STEP: delete the pod Aug 11 11:42:00.125: INFO: Waiting for pod pod-secrets-88f1473f-8518-4088-8ab9-29060a7b14a7 to disappear Aug 11 11:42:00.190: INFO: Pod pod-secrets-88f1473f-8518-4088-8ab9-29060a7b14a7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:42:00.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3054" for this suite. 
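The secret-volume test above mounts the secret with an explicit `defaultMode`. In the API that field is a plain integer, so a mode conventionally written in octal (for example `0400`) must be spelled as its decimal value in JSON manifests; a small illustration of the conversion (the mode value here is illustrative, not taken from this run):

```python
# defaultMode in a volume source is an integer field; JSON has no octal
# literals, so 0400 (owner read-only) is transmitted as decimal 256.
octal_mode = 0o400
decimal_mode = int("400", 8)

# Rendering back to the familiar octal string form:
as_octal_string = oct(octal_mode)
```

YAML manifests accept the `0400` spelling directly, which is one reason mode-related mistakes tend to appear when manifests are converted to JSON.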
• [SLOW TEST:7.133 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":223,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:42:00.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Aug 11 11:42:01.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8393' Aug 11 11:42:01.925: INFO: stderr: "" Aug 11 11:42:01.925: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Aug 11 11:42:02.929: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 11:42:02.929: INFO: Found 0 / 1 Aug 11 11:42:03.929: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 11:42:03.929: INFO: Found 0 / 1 Aug 11 11:42:05.215: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 11:42:05.215: INFO: Found 0 / 1 Aug 11 11:42:05.956: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 11:42:05.956: INFO: Found 0 / 1 Aug 11 11:42:06.930: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 11:42:06.930: INFO: Found 1 / 1 Aug 11 11:42:06.930: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 11 11:42:06.934: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 11:42:06.934: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 11 11:42:06.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config patch pod agnhost-master-m2tg2 --namespace=kubectl-8393 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 11 11:42:07.046: INFO: stderr: "" Aug 11 11:42:07.046: INFO: stdout: "pod/agnhost-master-m2tg2 patched\n" STEP: checking annotations Aug 11 11:42:07.059: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 11:42:07.059: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:42:07.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8393" for this suite. 
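The `kubectl patch` command above applies a strategic-merge patch that adds a single annotation. The patch document passed with `-p` is ordinary JSON and can be generated rather than hand-written:

```python
import json

# The same patch body the test passes with -p: add annotation x=y.
patch = {"metadata": {"annotations": {"x": "y"}}}
patch_json = json.dumps(patch, separators=(",", ":"))
```

The resulting string could then be used as, e.g., `kubectl patch pod <pod-name> -p "$patch_json"`; because it is a strategic-merge patch, existing annotations on the pod are preserved and only `x` is added or updated.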
• [SLOW TEST:6.242 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":18,"skipped":227,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:42:07.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 11 11:42:07.790: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 11 11:42:10.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732742927, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732742927, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732742927, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732742927, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 11:42:13.759: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:42:14.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1673" for this suite. STEP: Destroying namespace "webhook-1673-markers" for this suite. 
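The webhook setup above waits for the sample deployment to become ready; the logged `DeploymentStatus` shows `ReadyReplicas:0` and `UnavailableReplicas:1`, so the wait continues. A hedged sketch of the kind of readiness predicate such a wait can use (a hypothetical helper; the keys mirror the `apps/v1` DeploymentStatus fields printed in the log):

```python
def deployment_complete(spec_replicas, generation, status):
    """True when the deployment's observed state has caught up with its spec."""
    return (
        status.get("observedGeneration", 0) >= generation
        and status.get("updatedReplicas", 0) == spec_replicas
        and status.get("readyReplicas", 0) == spec_replicas
        and status.get("unavailableReplicas", 0) == 0
    )

# The status as logged at 11:42:10 -- not yet ready.
logged = {"observedGeneration": 1, "replicas": 1, "updatedReplicas": 1,
          "readyReplicas": 0, "availableReplicas": 0, "unavailableReplicas": 1}
ready_now = deployment_complete(1, 1, logged)
```

Once `readyReplicas` reaches the spec count and `unavailableReplicas` drops to zero, the predicate flips to true and the test proceeds to pair the service with the webhook endpoint.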
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.908 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":19,"skipped":232,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:42:16.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 11 11:42:17.798: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a47e0b3-2fc5-4fb8-8a34-5427fab9bb25" in namespace "projected-3135" to be "Succeeded or Failed" Aug 11 11:42:17.944: INFO: Pod 
"downwardapi-volume-1a47e0b3-2fc5-4fb8-8a34-5427fab9bb25": Phase="Pending", Reason="", readiness=false. Elapsed: 146.539309ms Aug 11 11:42:19.958: INFO: Pod "downwardapi-volume-1a47e0b3-2fc5-4fb8-8a34-5427fab9bb25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160137288s Aug 11 11:42:21.975: INFO: Pod "downwardapi-volume-1a47e0b3-2fc5-4fb8-8a34-5427fab9bb25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176701524s Aug 11 11:42:24.592: INFO: Pod "downwardapi-volume-1a47e0b3-2fc5-4fb8-8a34-5427fab9bb25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.794049683s STEP: Saw pod success Aug 11 11:42:24.592: INFO: Pod "downwardapi-volume-1a47e0b3-2fc5-4fb8-8a34-5427fab9bb25" satisfied condition "Succeeded or Failed" Aug 11 11:42:24.595: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-1a47e0b3-2fc5-4fb8-8a34-5427fab9bb25 container client-container: STEP: delete the pod Aug 11 11:42:25.347: INFO: Waiting for pod downwardapi-volume-1a47e0b3-2fc5-4fb8-8a34-5427fab9bb25 to disappear Aug 11 11:42:25.742: INFO: Pod downwardapi-volume-1a47e0b3-2fc5-4fb8-8a34-5427fab9bb25 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:42:25.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3135" for this suite. 
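Throughout this run the framework reports elapsed times in Go's duration notation ("6.794049683s", "146.539309ms"). A small parser for the subset of that notation appearing in this log (a hypothetical helper handling only the `ms` and `s` suffixes, not Go's full duration grammar):

```python
def parse_elapsed(text):
    """Convert a Go-style duration like '2.11368105s' or '349.702225ms' to seconds."""
    if text.endswith("ms"):            # check 'ms' before the bare 's' suffix
        return float(text[:-2]) / 1000.0
    if text.endswith("s"):
        return float(text[:-1])
    raise ValueError(f"unsupported duration: {text}")

seconds = parse_elapsed("146.539309ms")
```

Checking `ms` before `s` matters, since every millisecond value also ends in `s`.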
• [SLOW TEST:8.958 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":241,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:42:25.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-807.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-807.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-807.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-807.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 11 11:42:34.305: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:34.308: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:34.310: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:34.313: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:34.322: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:34.325: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod 
dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:34.327: INFO: Unable to read jessie_udp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:34.331: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:34.337: INFO: Lookups using dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local jessie_udp@dns-test-service-2.dns-807.svc.cluster.local jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local] Aug 11 11:42:39.341: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:39.343: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:39.346: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local from pod 
dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:39.348: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:39.356: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:39.358: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:39.361: INFO: Unable to read jessie_udp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:39.363: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:39.367: INFO: Lookups using dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local jessie_udp@dns-test-service-2.dns-807.svc.cluster.local jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local] Aug 11 11:42:44.449: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:44.467: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:44.470: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:44.511: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:44.524: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:44.526: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:44.529: INFO: Unable to read jessie_udp@dns-test-service-2.dns-807.svc.cluster.local from pod 
dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:44.531: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:44.537: INFO: Lookups using dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local jessie_udp@dns-test-service-2.dns-807.svc.cluster.local jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local] Aug 11 11:42:49.341: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:49.343: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:49.346: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:49.349: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local from pod 
dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:49.356: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:49.360: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:49.363: INFO: Unable to read jessie_udp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:49.367: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:49.371: INFO: Lookups using dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local jessie_udp@dns-test-service-2.dns-807.svc.cluster.local jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local] Aug 11 11:42:54.377: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod 
dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:54.381: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:54.385: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:54.387: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:54.395: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:54.398: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:54.400: INFO: Unable to read jessie_udp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:54.403: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find 
the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:42:54.408: INFO: Lookups using dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local jessie_udp@dns-test-service-2.dns-807.svc.cluster.local jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local] Aug 11 11:42:59.899: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:43:00.526: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:43:00.529: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:43:00.936: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:43:01.279: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the 
requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:43:01.283: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:43:01.286: INFO: Unable to read jessie_udp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:43:01.288: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local from pod dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203: the server could not find the requested resource (get pods dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203) Aug 11 11:43:01.292: INFO: Lookups using dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local wheezy_udp@dns-test-service-2.dns-807.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-807.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-807.svc.cluster.local jessie_udp@dns-test-service-2.dns-807.svc.cluster.local jessie_tcp@dns-test-service-2.dns-807.svc.cluster.local] Aug 11 11:43:04.374: INFO: DNS probes using dns-807/dns-test-1658f75b-8f83-4d98-a26d-ccc89203b203 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:43:05.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-807" for this suite. 
• [SLOW TEST:39.882 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":21,"skipped":247,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:43:05.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 11 11:43:06.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e127c00-a7a0-43ef-9d01-e97a2ab0ca05" in namespace "downward-api-753" to be "Succeeded or Failed" Aug 11 11:43:06.121: INFO: Pod "downwardapi-volume-7e127c00-a7a0-43ef-9d01-e97a2ab0ca05": Phase="Pending", Reason="", readiness=false. Elapsed: 34.117401ms Aug 11 11:43:08.123: INFO: Pod "downwardapi-volume-7e127c00-a7a0-43ef-9d01-e97a2ab0ca05": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.036452453s Aug 11 11:43:10.126: INFO: Pod "downwardapi-volume-7e127c00-a7a0-43ef-9d01-e97a2ab0ca05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039581818s Aug 11 11:43:12.155: INFO: Pod "downwardapi-volume-7e127c00-a7a0-43ef-9d01-e97a2ab0ca05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068386842s STEP: Saw pod success Aug 11 11:43:12.155: INFO: Pod "downwardapi-volume-7e127c00-a7a0-43ef-9d01-e97a2ab0ca05" satisfied condition "Succeeded or Failed" Aug 11 11:43:12.157: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-7e127c00-a7a0-43ef-9d01-e97a2ab0ca05 container client-container: STEP: delete the pod Aug 11 11:43:12.234: INFO: Waiting for pod downwardapi-volume-7e127c00-a7a0-43ef-9d01-e97a2ab0ca05 to disappear Aug 11 11:43:12.245: INFO: Pod downwardapi-volume-7e127c00-a7a0-43ef-9d01-e97a2ab0ca05 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:43:12.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-753" for this suite. 
• [SLOW TEST:6.436 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":262,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:43:12.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-21f3d376-ba51-446d-8c6b-58a962d67d06 STEP: Creating a pod to test consume configMaps Aug 11 11:43:12.349: INFO: Waiting up to 5m0s for pod "pod-configmaps-aeb86b64-2d20-4968-b4fd-771d30f06ec2" in namespace "configmap-3903" to be "Succeeded or Failed" Aug 11 11:43:12.371: INFO: Pod "pod-configmaps-aeb86b64-2d20-4968-b4fd-771d30f06ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.367753ms Aug 11 11:43:14.794: INFO: Pod "pod-configmaps-aeb86b64-2d20-4968-b4fd-771d30f06ec2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.445174527s Aug 11 11:43:17.022: INFO: Pod "pod-configmaps-aeb86b64-2d20-4968-b4fd-771d30f06ec2": Phase="Running", Reason="", readiness=true. Elapsed: 4.673601724s Aug 11 11:43:19.026: INFO: Pod "pod-configmaps-aeb86b64-2d20-4968-b4fd-771d30f06ec2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.677900761s STEP: Saw pod success Aug 11 11:43:19.026: INFO: Pod "pod-configmaps-aeb86b64-2d20-4968-b4fd-771d30f06ec2" satisfied condition "Succeeded or Failed" Aug 11 11:43:19.605: INFO: Trying to get logs from node kali-worker pod pod-configmaps-aeb86b64-2d20-4968-b4fd-771d30f06ec2 container configmap-volume-test: STEP: delete the pod Aug 11 11:43:20.643: INFO: Waiting for pod pod-configmaps-aeb86b64-2d20-4968-b4fd-771d30f06ec2 to disappear Aug 11 11:43:20.672: INFO: Pod pod-configmaps-aeb86b64-2d20-4968-b4fd-771d30f06ec2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:43:20.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3903" for this suite. • [SLOW TEST:8.702 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":265,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:43:20.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 11 11:43:39.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7712" for this suite. • [SLOW TEST:18.221 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":275,"completed":24,"skipped":280,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 11 11:43:39.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Aug 11 11:43:39.547: INFO: Created pod &Pod{ObjectMeta:{dns-4455 dns-4455 /api/v1/namespaces/dns-4455/pods/dns-4455 5e4e3852-c5ff-428c-8fca-2b21563e7585 8546522 0 2020-08-11 11:43:39 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-08-11 11:43:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 
115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mrg7n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mrg7n,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mrg7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 11 11:43:39.594: INFO: The status of Pod dns-4455 is Pending, waiting for it to be Running (with Ready = true) Aug 11 11:43:41.839: INFO: The status of Pod dns-4455 is Pending, waiting for it to be Running (with Ready = true) Aug 11 11:43:43.772: INFO: The status of Pod dns-4455 is Pending, waiting for it to be Running (with Ready = true) Aug 11 11:43:45.682: INFO: The status of Pod dns-4455 is Pending, waiting for it to be Running (with Ready = true)
Aug 11 11:43:47.610: INFO: The status of Pod dns-4455 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Aug 11 11:43:47.610: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4455 PodName:dns-4455 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 11:43:47.610: INFO: >>> kubeConfig: /root/.kube/config I0811 11:43:47.636368 7 log.go:172] (0xc002da2370) (0xc00148f5e0) Create stream I0811 11:43:47.636394 7 log.go:172] (0xc002da2370) (0xc00148f5e0) Stream added, broadcasting: 1 I0811 11:43:47.638182 7 log.go:172] (0xc002da2370) Reply frame received for 1 I0811 11:43:47.638217 7 log.go:172] (0xc002da2370) (0xc00175c0a0) Create stream I0811 11:43:47.638234 7 log.go:172] (0xc002da2370) (0xc00175c0a0) Stream added, broadcasting: 3 I0811 11:43:47.639062 7 log.go:172] (0xc002da2370) Reply frame received for 3 I0811 11:43:47.639090 7 log.go:172] (0xc002da2370) (0xc00148f720) Create stream I0811 11:43:47.639100 7 log.go:172] (0xc002da2370) (0xc00148f720) Stream added, broadcasting: 5 I0811 11:43:47.639916 7 log.go:172] (0xc002da2370) Reply frame received for 5 I0811 11:43:47.700503 7 log.go:172] (0xc002da2370) Data frame received for 3 I0811 11:43:47.700538 7 log.go:172] (0xc00175c0a0) (3) Data frame handling I0811 11:43:47.700563 7 log.go:172] (0xc00175c0a0) (3) Data frame sent I0811 11:43:47.701967 7 log.go:172] (0xc002da2370) Data frame received for 5 I0811 11:43:47.701996 7 log.go:172] (0xc00148f720) (5) Data frame handling I0811 11:43:47.702021 7 log.go:172] (0xc002da2370) Data frame received for 3 I0811 11:43:47.702033 7 log.go:172] (0xc00175c0a0) (3) Data frame handling I0811 11:43:47.704010 7 log.go:172] (0xc002da2370) Data frame received for 1 I0811 11:43:47.704040 7 log.go:172] (0xc00148f5e0) (1) Data frame handling I0811 11:43:47.704062 7 log.go:172] (0xc00148f5e0) (1) Data frame sent I0811 11:43:47.704092 7 log.go:172] (0xc002da2370) (0xc00148f5e0) Stream 
removed, broadcasting: 1
I0811 11:43:47.704156       7 log.go:172] (0xc002da2370) Go away received
I0811 11:43:47.704300       7 log.go:172] (0xc002da2370) (0xc00148f5e0) Stream removed, broadcasting: 1
I0811 11:43:47.704323       7 log.go:172] (0xc002da2370) (0xc00175c0a0) Stream removed, broadcasting: 3
I0811 11:43:47.704336       7 log.go:172] (0xc002da2370) (0xc00148f720) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Aug 11 11:43:47.704: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4455 PodName:dns-4455 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 11:43:47.704: INFO: >>> kubeConfig: /root/.kube/config
I0811 11:43:48.096717       7 log.go:172] (0xc002b2e370) (0xc0016fa500) Create stream
I0811 11:43:48.096871       7 log.go:172] (0xc002b2e370) (0xc0016fa500) Stream added, broadcasting: 1
I0811 11:43:48.098257       7 log.go:172] (0xc002b2e370) Reply frame received for 1
I0811 11:43:48.098292       7 log.go:172] (0xc002b2e370) (0xc00175c320) Create stream
I0811 11:43:48.098303       7 log.go:172] (0xc002b2e370) (0xc00175c320) Stream added, broadcasting: 3
I0811 11:43:48.098889       7 log.go:172] (0xc002b2e370) Reply frame received for 3
I0811 11:43:48.098914       7 log.go:172] (0xc002b2e370) (0xc0016fa6e0) Create stream
I0811 11:43:48.098923       7 log.go:172] (0xc002b2e370) (0xc0016fa6e0) Stream added, broadcasting: 5
I0811 11:43:48.099464       7 log.go:172] (0xc002b2e370) Reply frame received for 5
I0811 11:43:48.163289       7 log.go:172] (0xc002b2e370) Data frame received for 3
I0811 11:43:48.163359       7 log.go:172] (0xc00175c320) (3) Data frame handling
I0811 11:43:48.163400       7 log.go:172] (0xc00175c320) (3) Data frame sent
I0811 11:43:48.165066       7 log.go:172] (0xc002b2e370) Data frame received for 3
I0811 11:43:48.165081       7 log.go:172] (0xc00175c320) (3) Data frame handling
I0811 11:43:48.165136       7 log.go:172] (0xc002b2e370) Data frame received for 5
I0811 11:43:48.165160       7 log.go:172] (0xc0016fa6e0) (5) Data frame handling
I0811 11:43:48.166487       7 log.go:172] (0xc002b2e370) Data frame received for 1
I0811 11:43:48.166513       7 log.go:172] (0xc0016fa500) (1) Data frame handling
I0811 11:43:48.166538       7 log.go:172] (0xc0016fa500) (1) Data frame sent
I0811 11:43:48.166557       7 log.go:172] (0xc002b2e370) (0xc0016fa500) Stream removed, broadcasting: 1
I0811 11:43:48.166595       7 log.go:172] (0xc002b2e370) Go away received
I0811 11:43:48.166620       7 log.go:172] (0xc002b2e370) (0xc0016fa500) Stream removed, broadcasting: 1
I0811 11:43:48.166634       7 log.go:172] (0xc002b2e370) (0xc00175c320) Stream removed, broadcasting: 3
I0811 11:43:48.166644       7 log.go:172] (0xc002b2e370) (0xc0016fa6e0) Stream removed, broadcasting: 5
Aug 11 11:43:48.166: INFO: Deleting pod dns-4455...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:43:48.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4455" for this suite.
• [SLOW TEST:9.433 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":25,"skipped":297,"failed":0}
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:43:48.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 11:43:49.118: INFO: (0) /api/v1/nodes/kali-worker:10250/proxy/logs/:
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-346/configmap-test-42a98ec5-4ffa-446b-bd6c-e84ae481d9bc
STEP: Creating a pod to test consume configMaps
Aug 11 11:43:50.118: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb15e868-9ef6-42cc-9403-800d6661c89d" in namespace "configmap-346" to be "Succeeded or Failed"
Aug 11 11:43:50.169: INFO: Pod "pod-configmaps-bb15e868-9ef6-42cc-9403-800d6661c89d": Phase="Pending", Reason="", readiness=false. Elapsed: 51.262471ms
Aug 11 11:43:52.173: INFO: Pod "pod-configmaps-bb15e868-9ef6-42cc-9403-800d6661c89d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055403695s
Aug 11 11:43:54.177: INFO: Pod "pod-configmaps-bb15e868-9ef6-42cc-9403-800d6661c89d": Phase="Running", Reason="", readiness=true. Elapsed: 4.059859036s
Aug 11 11:43:56.182: INFO: Pod "pod-configmaps-bb15e868-9ef6-42cc-9403-800d6661c89d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063903621s
STEP: Saw pod success
Aug 11 11:43:56.182: INFO: Pod "pod-configmaps-bb15e868-9ef6-42cc-9403-800d6661c89d" satisfied condition "Succeeded or Failed"
Aug 11 11:43:56.185: INFO: Trying to get logs from node kali-worker pod pod-configmaps-bb15e868-9ef6-42cc-9403-800d6661c89d container env-test: 
STEP: delete the pod
Aug 11 11:43:56.982: INFO: Waiting for pod pod-configmaps-bb15e868-9ef6-42cc-9403-800d6661c89d to disappear
Aug 11 11:43:57.509: INFO: Pod pod-configmaps-bb15e868-9ef6-42cc-9403-800d6661c89d no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:43:57.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-346" for this suite.

• [SLOW TEST:9.001 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":303,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:43:58.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5733 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5733;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5733 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5733;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5733.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5733.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5733.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5733.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5733.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5733.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5733.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5733.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5733.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5733.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5733.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5733.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5733.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 240.164.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.164.240_udp@PTR;check="$$(dig +tcp +noall +answer +search 240.164.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.164.240_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5733 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5733;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5733 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5733;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5733.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5733.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5733.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5733.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5733.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5733.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5733.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5733.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5733.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5733.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5733.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5733.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5733.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 240.164.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.164.240_udp@PTR;check="$$(dig +tcp +noall +answer +search 240.164.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.164.240_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 11 11:44:11.016: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.069: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.107: INFO: Unable to read wheezy_udp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.111: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.158: INFO: Unable to read wheezy_udp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.161: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.165: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.167: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.183: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.186: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.189: INFO: Unable to read jessie_udp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.191: INFO: Unable to read jessie_tcp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.194: INFO: Unable to read jessie_udp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.196: INFO: Unable to read jessie_tcp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.198: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.200: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:11.240: INFO: Lookups using dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5733 wheezy_tcp@dns-test-service.dns-5733 wheezy_udp@dns-test-service.dns-5733.svc wheezy_tcp@dns-test-service.dns-5733.svc wheezy_udp@_http._tcp.dns-test-service.dns-5733.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5733.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5733 jessie_tcp@dns-test-service.dns-5733 jessie_udp@dns-test-service.dns-5733.svc jessie_tcp@dns-test-service.dns-5733.svc jessie_udp@_http._tcp.dns-test-service.dns-5733.svc jessie_tcp@_http._tcp.dns-test-service.dns-5733.svc]

Aug 11 11:44:16.480: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.483: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.489: INFO: Unable to read wheezy_udp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.496: INFO: Unable to read wheezy_udp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.498: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.500: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.501: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.601: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.603: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.605: INFO: Unable to read jessie_udp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.606: INFO: Unable to read jessie_tcp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.608: INFO: Unable to read jessie_udp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.610: INFO: Unable to read jessie_tcp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.612: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.614: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:16.626: INFO: Lookups using dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5733 wheezy_tcp@dns-test-service.dns-5733 wheezy_udp@dns-test-service.dns-5733.svc wheezy_tcp@dns-test-service.dns-5733.svc wheezy_udp@_http._tcp.dns-test-service.dns-5733.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5733.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5733 jessie_tcp@dns-test-service.dns-5733 jessie_udp@dns-test-service.dns-5733.svc jessie_tcp@dns-test-service.dns-5733.svc jessie_udp@_http._tcp.dns-test-service.dns-5733.svc jessie_tcp@_http._tcp.dns-test-service.dns-5733.svc]

Aug 11 11:44:21.407: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:21.411: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:21.415: INFO: Unable to read wheezy_udp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:21.418: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:21.420: INFO: Unable to read wheezy_udp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:21.421: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:22.674: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:22.930: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:23.566: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:23.568: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:23.570: INFO: Unable to read jessie_udp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:23.577: INFO: Unable to read jessie_tcp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:23.579: INFO: Unable to read jessie_udp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:23.581: INFO: Unable to read jessie_tcp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:23.583: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:23.585: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:24.146: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: Get https://172.30.12.66:35995/api/v1/namespaces/dns-5733/pods/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d/proxy/results/jessie_udp@_http._tcp.test-service-2.dns-5733.svc: stream error: stream ID 1883; INTERNAL_ERROR
Aug 11 11:44:26.921: INFO: Lookups using dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5733 wheezy_tcp@dns-test-service.dns-5733 wheezy_udp@dns-test-service.dns-5733.svc wheezy_tcp@dns-test-service.dns-5733.svc wheezy_udp@_http._tcp.dns-test-service.dns-5733.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5733.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5733 jessie_tcp@dns-test-service.dns-5733 jessie_udp@dns-test-service.dns-5733.svc jessie_tcp@dns-test-service.dns-5733.svc jessie_udp@_http._tcp.dns-test-service.dns-5733.svc jessie_tcp@_http._tcp.dns-test-service.dns-5733.svc jessie_udp@_http._tcp.test-service-2.dns-5733.svc]

Aug 11 11:44:31.244: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.250: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.254: INFO: Unable to read wheezy_udp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.257: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.259: INFO: Unable to read wheezy_udp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.261: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.264: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.267: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.282: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.284: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.286: INFO: Unable to read jessie_udp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.288: INFO: Unable to read jessie_tcp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.290: INFO: Unable to read jessie_udp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.293: INFO: Unable to read jessie_tcp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.296: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.298: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:31.357: INFO: Lookups using dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5733 wheezy_tcp@dns-test-service.dns-5733 wheezy_udp@dns-test-service.dns-5733.svc wheezy_tcp@dns-test-service.dns-5733.svc wheezy_udp@_http._tcp.dns-test-service.dns-5733.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5733.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5733 jessie_tcp@dns-test-service.dns-5733 jessie_udp@dns-test-service.dns-5733.svc jessie_tcp@dns-test-service.dns-5733.svc jessie_udp@_http._tcp.dns-test-service.dns-5733.svc jessie_tcp@_http._tcp.dns-test-service.dns-5733.svc]

Aug 11 11:44:36.282: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.286: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.289: INFO: Unable to read wheezy_udp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.292: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.296: INFO: Unable to read wheezy_udp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.299: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.302: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.306: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.328: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.331: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.334: INFO: Unable to read jessie_udp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.336: INFO: Unable to read jessie_tcp@dns-test-service.dns-5733 from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.339: INFO: Unable to read jessie_udp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.342: INFO: Unable to read jessie_tcp@dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.346: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.349: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5733.svc from pod dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d: the server could not find the requested resource (get pods dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d)
Aug 11 11:44:36.366: INFO: Lookups using dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5733 wheezy_tcp@dns-test-service.dns-5733 wheezy_udp@dns-test-service.dns-5733.svc wheezy_tcp@dns-test-service.dns-5733.svc wheezy_udp@_http._tcp.dns-test-service.dns-5733.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5733.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5733 jessie_tcp@dns-test-service.dns-5733 jessie_udp@dns-test-service.dns-5733.svc jessie_tcp@dns-test-service.dns-5733.svc jessie_udp@_http._tcp.dns-test-service.dns-5733.svc jessie_tcp@_http._tcp.dns-test-service.dns-5733.svc]

Aug 11 11:44:41.327: INFO: DNS probes using dns-5733/dns-test-8dfadc42-cd26-4aac-933b-f8f8a5814f2d succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:44:46.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5733" for this suite.

• [SLOW TEST:48.775 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":28,"skipped":321,"failed":0}
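Note how the "Unable to read …" failures in the block above clear on a later pass: the framework re-polls the probe results every few seconds (11:44:31, 11:44:36, 11:44:41) until every expected name resolves, and only then logs "DNS probes … succeeded". That retry-until-success pattern can be sketched in Python; the `lookup` callable and the five-second interval are illustrative assumptions, not the framework's actual API:

```python
import time

def wait_for_lookups(lookup, names, interval=5.0, timeout=60.0):
    """Re-run lookup(name) for every name until all succeed or time runs out.

    `lookup` is a hypothetical callable returning True on success; the real
    e2e test instead reads result files written inside the probe pod.
    """
    deadline = time.monotonic() + timeout
    pending = list(names)
    while pending:
        # Keep only the names that still fail, mirroring the shrinking
        # "Lookups ... failed for: [...]" list in the log.
        pending = [n for n in pending if not lookup(n)]
        if not pending:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
    return True
```

A name that resolves on any retry before the deadline counts as a success, which is why transient "could not find the requested resource" errors do not fail the spec.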
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:44:47.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 11 11:45:04.108: INFO: Successfully updated pod "labelsupdate75521b7a-c23e-4a71-a015-43a11bc00e95"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:45:06.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5244" for this suite.

• [SLOW TEST:19.357 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":326,"failed":0}
SSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:45:06.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-8589, will wait for the garbage collector to delete the pods
Aug 11 11:45:14.511: INFO: Deleting Job.batch foo took: 5.66599ms
Aug 11 11:45:14.811: INFO: Terminating Job.batch foo pods took: 300.290702ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:45:54.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8589" for this suite.

• [SLOW TEST:47.878 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":30,"skipped":331,"failed":0}
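The Job test above has two timed phases, which is where the two "took:" lines come from: issuing the delete ("Deleting Job.batch foo took: 5.66599ms"), then waiting for the garbage collector to remove the Job's pods ("Terminating Job.batch foo pods took: 300.290702ms"). A minimal sketch of that shape, with `delete_fn` and `list_pods_fn` as stand-ins for the real API calls (not the framework's code):

```python
import time

def delete_and_wait(delete_fn, list_pods_fn, poll=0.3, timeout=60.0):
    """Delete an object, then wait for the GC to remove its dependent pods.

    delete_fn() returns once the delete request is accepted; the pods linger
    until the garbage collector reaps them, so we poll list_pods_fn().
    """
    start = time.monotonic()
    delete_fn()
    delete_took = time.monotonic() - start

    start = time.monotonic()
    deadline = start + timeout
    while list_pods_fn():  # non-empty list means pods still terminating
        if time.monotonic() >= deadline:
            raise TimeoutError("pods were not garbage-collected in time")
        time.sleep(poll)
    terminate_took = time.monotonic() - start
    return delete_took, terminate_took
```

The subsequent "Ensuring job was deleted" step is the same poll applied to the Job object itself.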
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:45:54.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 11:45:55.953: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 11:45:57.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743155, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743155, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743156, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743155, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 11:45:59.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743155, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743155, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743156, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743155, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 11:46:02.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743155, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743155, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743156, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743155, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 11:46:04.991: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:46:05.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5002" for this suite.
STEP: Destroying namespace "webhook-5002-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.222 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":31,"skipped":359,"failed":0}
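The repeated `v1.DeploymentStatus` dumps in the block above are the readiness wait for the webhook deployment: the loop keeps polling while `AvailableReplicas:0, UnavailableReplicas:1` and the `Available` condition is `False`. A rough predicate for when that wait ends, with `status` as a plain dict standing in for the typed DeploymentStatus object (a sketch, not the framework's exact completeness check):

```python
def deployment_complete(status, desired_replicas):
    """True once every desired replica is both updated and available.

    Mirrors the fields visible in the logged DeploymentStatus: the wait
    above keeps looping while unavailableReplicas is still nonzero.
    """
    return (
        status.get("updatedReplicas", 0) == desired_replicas
        and status.get("availableReplicas", 0) == desired_replicas
        and status.get("unavailableReplicas", 0) == 0
    )
```

Against the logged snapshot (`UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1`) this is false, so polling continues until the "Deploying the webhook service" step.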
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:46:05.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 11 11:46:08.946: INFO: Pod name wrapped-volume-race-39d4d972-3203-48d9-b565-2e629c9a2530: Found 0 pods out of 5
Aug 11 11:46:13.969: INFO: Pod name wrapped-volume-race-39d4d972-3203-48d9-b565-2e629c9a2530: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-39d4d972-3203-48d9-b565-2e629c9a2530 in namespace emptydir-wrapper-322, will wait for the garbage collector to delete the pods
Aug 11 11:46:47.443: INFO: Deleting ReplicationController wrapped-volume-race-39d4d972-3203-48d9-b565-2e629c9a2530 took: 838.897326ms
Aug 11 11:46:50.043: INFO: Terminating ReplicationController wrapped-volume-race-39d4d972-3203-48d9-b565-2e629c9a2530 pods took: 2.6002655s
STEP: Creating RC which spawns configmap-volume pods
Aug 11 11:47:13.516: INFO: Pod name wrapped-volume-race-58fd7c66-982b-4a71-b15f-843ffb2c29e3: Found 0 pods out of 5
Aug 11 11:47:18.577: INFO: Pod name wrapped-volume-race-58fd7c66-982b-4a71-b15f-843ffb2c29e3: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-58fd7c66-982b-4a71-b15f-843ffb2c29e3 in namespace emptydir-wrapper-322, will wait for the garbage collector to delete the pods
Aug 11 11:47:39.228: INFO: Deleting ReplicationController wrapped-volume-race-58fd7c66-982b-4a71-b15f-843ffb2c29e3 took: 74.570113ms
Aug 11 11:47:39.928: INFO: Terminating ReplicationController wrapped-volume-race-58fd7c66-982b-4a71-b15f-843ffb2c29e3 pods took: 700.236776ms
STEP: Creating RC which spawns configmap-volume pods
Aug 11 11:47:54.072: INFO: Pod name wrapped-volume-race-822a74bc-d79d-44aa-a240-27353f4f12e5: Found 0 pods out of 5
Aug 11 11:47:59.274: INFO: Pod name wrapped-volume-race-822a74bc-d79d-44aa-a240-27353f4f12e5: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-822a74bc-d79d-44aa-a240-27353f4f12e5 in namespace emptydir-wrapper-322, will wait for the garbage collector to delete the pods
Aug 11 11:48:17.596: INFO: Deleting ReplicationController wrapped-volume-race-822a74bc-d79d-44aa-a240-27353f4f12e5 took: 74.732293ms
Aug 11 11:48:17.996: INFO: Terminating ReplicationController wrapped-volume-race-822a74bc-d79d-44aa-a240-27353f4f12e5 pods took: 400.359754ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:48:35.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-322" for this suite.

• [SLOW TEST:149.894 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":32,"skipped":378,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:48:35.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:48:35.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1707" for this suite.
STEP: Destroying namespace "nspatchtest-d5c1ccf6-2899-4dda-8e04-1edfe1a184e9-4730" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":33,"skipped":391,"failed":0}
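The "patching the Namespace" step above amounts to submitting a small patch that adds a label, then re-reading the Namespace to confirm the label stuck. One of the patch formats the API server accepts for this is an RFC 7386 JSON merge patch (`application/merge-patch+json`); its semantics can be sketched as follows (the `testLabel` key in the test below is a hypothetical example, not necessarily what this spec sends):

```python
def merge_patch(original, patch):
    """Apply an RFC 7386 JSON merge patch to a plain-dict object."""
    if not isinstance(patch, dict):
        return patch  # non-objects replace the target wholesale
    result = dict(original) if isinstance(original, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null in the patch deletes the key
        else:
            result[key] = merge_patch(result.get(key), value)
    return result
```

Because maps merge recursively, a patch touching `metadata.labels` leaves the Namespace's other metadata untouched, which is exactly what the "ensuring it has the label" check relies on.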
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:48:35.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Aug 11 11:48:35.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:48:53.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4444" for this suite.

• [SLOW TEST:18.042 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":34,"skipped":405,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:48:53.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:49:01.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5678" for this suite.

• [SLOW TEST:7.780 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":35,"skipped":410,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:49:01.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9571.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9571.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9571.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9571.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9571.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9571.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9571.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9571.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9571.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9571.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 7.86.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.86.7_udp@PTR;check="$$(dig +tcp +noall +answer +search 7.86.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.86.7_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9571.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9571.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9571.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9571.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9571.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9571.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9571.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9571.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9571.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9571.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9571.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 7.86.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.86.7_udp@PTR;check="$$(dig +tcp +noall +answer +search 7.86.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.86.7_tcp@PTR;sleep 1; done

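Two name transforms buried in those probe commands are worth unpacking: the `awk` pipeline turns the pod's own IP into its dashed pod A-record name (`$1"-"$2"-"$3"-"$4".dns-9571.pod.cluster.local"`), and the PTR queries reverse the service IP's octets under `in-addr.arpa.` (10.101.86.7 becomes `7.86.101.10.in-addr.arpa.`). In Python (a sketch of the transforms, not the framework's code; the pod IP in the example is made up):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Dashed-IP pod DNS name, as built by the awk pipeline in the probe:
    e.g. 10.244.1.7 in namespace dns-9571 ->
    10-244-1-7.dns-9571.pod.cluster.local"""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

def ptr_name(ip):
    """Reverse-lookup name for an IPv4 address: octets reversed, then
    suffixed with in-addr.arpa., as in the 10.101.86.7 PTR probes above."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."
```

A non-empty `dig` answer for either name writes an `OK` marker file under `/results`, which is what the prober later collects.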
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 11 11:49:16.529: INFO: Unable to read wheezy_udp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:16.532: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:16.534: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:16.536: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:16.550: INFO: Unable to read jessie_udp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:16.552: INFO: Unable to read jessie_tcp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:16.554: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:16.555: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:16.568: INFO: Lookups using dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290 failed for: [wheezy_udp@dns-test-service.dns-9571.svc.cluster.local wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local jessie_udp@dns-test-service.dns-9571.svc.cluster.local jessie_tcp@dns-test-service.dns-9571.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local]

Aug 11 11:49:22.051: INFO: Unable to read wheezy_udp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:22.126: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:22.639: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:22.642: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:22.663: INFO: Unable to read jessie_udp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:22.666: INFO: Unable to read jessie_tcp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:22.669: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:22.671: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:22.920: INFO: Lookups using dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290 failed for: [wheezy_udp@dns-test-service.dns-9571.svc.cluster.local wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local jessie_udp@dns-test-service.dns-9571.svc.cluster.local jessie_tcp@dns-test-service.dns-9571.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local]

Aug 11 11:49:26.576: INFO: Unable to read wheezy_udp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:26.578: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:26.581: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:26.584: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:26.602: INFO: Unable to read jessie_udp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:26.603: INFO: Unable to read jessie_tcp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:26.605: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:26.610: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:26.622: INFO: Lookups using dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290 failed for: [wheezy_udp@dns-test-service.dns-9571.svc.cluster.local wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local jessie_udp@dns-test-service.dns-9571.svc.cluster.local jessie_tcp@dns-test-service.dns-9571.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local]

Aug 11 11:49:31.602: INFO: Unable to read wheezy_udp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:31.615: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:31.618: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:31.622: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:32.191: INFO: Unable to read jessie_udp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:32.228: INFO: Unable to read jessie_tcp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:32.919: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:33.019: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:34.298: INFO: Lookups using dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290 failed for: [wheezy_udp@dns-test-service.dns-9571.svc.cluster.local wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local jessie_udp@dns-test-service.dns-9571.svc.cluster.local jessie_tcp@dns-test-service.dns-9571.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local]

Aug 11 11:49:36.572: INFO: Unable to read wheezy_udp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:36.575: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:36.577: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:36.580: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:36.597: INFO: Unable to read jessie_udp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:36.600: INFO: Unable to read jessie_tcp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:36.602: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:36.604: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:36.874: INFO: Lookups using dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290 failed for: [wheezy_udp@dns-test-service.dns-9571.svc.cluster.local wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local jessie_udp@dns-test-service.dns-9571.svc.cluster.local jessie_tcp@dns-test-service.dns-9571.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local]

Aug 11 11:49:41.573: INFO: Unable to read wheezy_udp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:41.576: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:41.578: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:41.581: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:41.598: INFO: Unable to read jessie_udp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:41.637: INFO: Unable to read jessie_tcp@dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:41.639: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:41.642: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local from pod dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290: the server could not find the requested resource (get pods dns-test-7bbc0909-1573-49e8-b288-2e9659985290)
Aug 11 11:49:41.668: INFO: Lookups using dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290 failed for: [wheezy_udp@dns-test-service.dns-9571.svc.cluster.local wheezy_tcp@dns-test-service.dns-9571.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local jessie_udp@dns-test-service.dns-9571.svc.cluster.local jessie_tcp@dns-test-service.dns-9571.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9571.svc.cluster.local]

Aug 11 11:49:46.626: INFO: DNS probes using dns-9571/dns-test-7bbc0909-1573-49e8-b288-2e9659985290 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:49:48.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9571" for this suite.

• [SLOW TEST:46.900 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":36,"skipped":424,"failed":0}
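The lookups above probe A and SRV records for a test Service in namespace dns-9571. A minimal sketch of the kind of objects this e2e test creates (names and selectors here are illustrative stand-ins for the generated ones in the log; the `_http._tcp` SRV records come from the named `http` port):

```yaml
# Hypothetical reconstruction of the DNS test's Services: a ClusterIP
# Service plus a headless variant. The named TCP port "http" is what
# produces the _http._tcp.dns-test-service.<ns>.svc.cluster.local
# SRV records the probe pod queries over both UDP and TCP.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
spec:
  selector:
    dns-test: "true"        # illustrative selector
  ports:
  - name: http
    port: 80
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-headless
spec:
  clusterIP: None           # headless: DNS resolves to pod IPs directly
  selector:
    dns-test: "true"
  ports:
  - name: http
    port: 80
    protocol: TCP
```

The probe pod then runs loops along the lines of `dig +short _http._tcp.dns-test-service.dns-9571.svc.cluster.local SRV`, and the "Unable to read" lines above are the framework polling the probe pod's result files until every expected name resolves.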
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:49:48.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-231b9e75-33cd-4f86-be78-7946a7c9d265
STEP: Creating a pod to test consume secrets
Aug 11 11:49:49.220: INFO: Waiting up to 5m0s for pod "pod-secrets-f0d189f3-0530-4fd1-a014-fd11e679302b" in namespace "secrets-3081" to be "Succeeded or Failed"
Aug 11 11:49:49.521: INFO: Pod "pod-secrets-f0d189f3-0530-4fd1-a014-fd11e679302b": Phase="Pending", Reason="", readiness=false. Elapsed: 300.695275ms
Aug 11 11:49:51.525: INFO: Pod "pod-secrets-f0d189f3-0530-4fd1-a014-fd11e679302b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304690518s
Aug 11 11:49:53.551: INFO: Pod "pod-secrets-f0d189f3-0530-4fd1-a014-fd11e679302b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331032377s
Aug 11 11:49:55.599: INFO: Pod "pod-secrets-f0d189f3-0530-4fd1-a014-fd11e679302b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.378904193s
STEP: Saw pod success
Aug 11 11:49:55.599: INFO: Pod "pod-secrets-f0d189f3-0530-4fd1-a014-fd11e679302b" satisfied condition "Succeeded or Failed"
Aug 11 11:49:55.636: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-f0d189f3-0530-4fd1-a014-fd11e679302b container secret-volume-test: 
STEP: delete the pod
Aug 11 11:49:55.842: INFO: Waiting for pod pod-secrets-f0d189f3-0530-4fd1-a014-fd11e679302b to disappear
Aug 11 11:49:55.893: INFO: Pod pod-secrets-f0d189f3-0530-4fd1-a014-fd11e679302b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:49:55.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3081" for this suite.

• [SLOW TEST:7.554 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":433,"failed":0}
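The secret-volume test above creates a Secret, mounts it into a short-lived pod, and checks the pod reaches `Succeeded` after reading the mounted file. A minimal sketch under assumed names (the log's generated names like `secret-test-231b9e75-…` are replaced with illustrative ones, and the image/command is a stand-in for the e2e test's mount-test container):

```yaml
# Hypothetical equivalent of the e2e secret-volume consumption test:
# a Secret mounted read-only into a pod that reads it once and exits.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test          # illustrative; the test generates a UUID name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never       # lets the pod phase settle at Succeeded/Failed
  containers:
  - name: secret-volume-test
    image: busybox           # stand-in for the e2e test image
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
```

The framework's wait loop in the log ("Waiting up to 5m0s … to be 'Succeeded or Failed'") corresponds to polling this pod's `status.phase` until it leaves `Pending`/`Running`.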
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:49:55.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 11 11:50:04.743: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:50:05.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5299" for this suite.

• [SLOW TEST:9.118 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":452,"failed":0}
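The termination-message test above expects the message `OK` to be read from the termination-message file even though `FallbackToLogsOnError` is set, because the pod succeeds and the file exists (the fallback to logs only applies when the container fails and the file is empty). A hedged sketch of such a pod, with illustrative names:

```yaml
# Hypothetical pod matching the terminated-container test: the container
# writes "OK" to the termination-message path and exits 0. With
# FallbackToLogsOnError, the kubelet still takes the message from the
# file here, since the file is non-empty and the container succeeded.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-test
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox           # stand-in image
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```

After the pod terminates, the message surfaces in `status.containerStatuses[0].state.terminated.message`, which is what the log's `Expected: &{OK} to match Container's Termination Message: OK` assertion checks.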
SSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:50:05.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-b7dc0d8d-54b3-4f52-827a-6fc8d889d062
STEP: Creating a pod to test consume secrets
Aug 11 11:50:06.533: INFO: Waiting up to 5m0s for pod "pod-secrets-b4bd5904-687f-477e-9b96-2cc6a2e2d746" in namespace "secrets-3498" to be "Succeeded or Failed"
Aug 11 11:50:06.865: INFO: Pod "pod-secrets-b4bd5904-687f-477e-9b96-2cc6a2e2d746": Phase="Pending", Reason="", readiness=false. Elapsed: 332.094738ms
Aug 11 11:50:09.345: INFO: Pod "pod-secrets-b4bd5904-687f-477e-9b96-2cc6a2e2d746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.811459014s
Aug 11 11:50:11.348: INFO: Pod "pod-secrets-b4bd5904-687f-477e-9b96-2cc6a2e2d746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.81482855s
STEP: Saw pod success
Aug 11 11:50:11.348: INFO: Pod "pod-secrets-b4bd5904-687f-477e-9b96-2cc6a2e2d746" satisfied condition "Succeeded or Failed"
Aug 11 11:50:11.350: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-b4bd5904-687f-477e-9b96-2cc6a2e2d746 container secret-volume-test: 
STEP: delete the pod
Aug 11 11:50:11.483: INFO: Waiting for pod pod-secrets-b4bd5904-687f-477e-9b96-2cc6a2e2d746 to disappear
Aug 11 11:50:11.531: INFO: Pod pod-secrets-b4bd5904-687f-477e-9b96-2cc6a2e2d746 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:50:11.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3498" for this suite.
STEP: Destroying namespace "secret-namespace-7155" for this suite.

• [SLOW TEST:6.524 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":456,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:50:11.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 11:50:11.627: INFO: Creating ReplicaSet my-hostname-basic-49cb5c6a-5bb1-463d-959a-7be02b4c5397
Aug 11 11:50:11.665: INFO: Pod name my-hostname-basic-49cb5c6a-5bb1-463d-959a-7be02b4c5397: Found 0 pods out of 1
Aug 11 11:50:17.118: INFO: Pod name my-hostname-basic-49cb5c6a-5bb1-463d-959a-7be02b4c5397: Found 1 pods out of 1
Aug 11 11:50:17.118: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-49cb5c6a-5bb1-463d-959a-7be02b4c5397" is running
Aug 11 11:50:17.206: INFO: Pod "my-hostname-basic-49cb5c6a-5bb1-463d-959a-7be02b4c5397-sdqtx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 11:50:11 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 11:50:14 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 11:50:14 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 11:50:11 +0000 UTC Reason: Message:}])
Aug 11 11:50:17.207: INFO: Trying to dial the pod
Aug 11 11:50:22.215: INFO: Controller my-hostname-basic-49cb5c6a-5bb1-463d-959a-7be02b4c5397: Got expected result from replica 1 [my-hostname-basic-49cb5c6a-5bb1-463d-959a-7be02b4c5397-sdqtx]: "my-hostname-basic-49cb5c6a-5bb1-463d-959a-7be02b4c5397-sdqtx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:50:22.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7448" for this suite.

• [SLOW TEST:10.678 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":40,"skipped":467,"failed":0}
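The ReplicaSet test above verifies that each replica serves its own hostname, which is why the expected response equals the pod name (`…-sdqtx`). A minimal sketch with illustrative names (the real test uses a generated `my-hostname-basic-<uuid>` name and a public hostname-serving test image):

```yaml
# Hypothetical ReplicaSet like the one in the log: one replica running a
# container that serves the pod's hostname over HTTP, so dialing each
# replica returns its own pod name.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic    # illustrative; the test appends a UUID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: registry.example.com/serve-hostname:latest  # stand-in image
        ports:
        - containerPort: 9376
```

The log's "Trying to dial the pod" step then makes an HTTP request to each replica and asserts the body matches the pod name, one required success per replica.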
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:50:22.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:50:22.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2893" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":468,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:50:22.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 11 11:50:22.839: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 11 11:50:22.870: INFO: Waiting for terminating namespaces to be deleted...
Aug 11 11:50:22.872: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 11 11:50:22.886: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 11 11:50:22.886: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 11 11:50:22.886: INFO: bin-falsefdede9f8-1224-416b-ae66-d9c903411b15 from kubelet-test-2893 started at 2020-08-11 11:50:22 +0000 UTC (1 container statuses recorded)
Aug 11 11:50:22.886: INFO: 	Container bin-falsefdede9f8-1224-416b-ae66-d9c903411b15 ready: false, restart count 0
Aug 11 11:50:22.886: INFO: my-hostname-basic-49cb5c6a-5bb1-463d-959a-7be02b4c5397-sdqtx from replicaset-7448 started at 2020-08-11 11:50:11 +0000 UTC (1 container statuses recorded)
Aug 11 11:50:22.886: INFO: 	Container my-hostname-basic-49cb5c6a-5bb1-463d-959a-7be02b4c5397 ready: true, restart count 0
Aug 11 11:50:22.886: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug 11 11:50:22.886: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 11 11:50:22.886: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug 11 11:50:22.886: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 11:50:22.886: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug 11 11:50:22.886: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 11 11:50:22.886: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 11 11:50:22.890: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 11 11:50:22.890: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 11:50:22.890: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 11 11:50:22.890: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 11 11:50:22.890: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 11 11:50:22.890: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 11 11:50:22.890: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug 11 11:50:22.890: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1728d021-cac5-4182-8687-95587afad4a9 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-1728d021-cac5-4182-8687-95587afad4a9 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1728d021-cac5-4182-8687-95587afad4a9
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:55:36.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9168" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:313.924 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":42,"skipped":484,"failed":0}
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:55:36.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-dcf9f417-cd7c-469a-aa22-9b5297f9a4cc in namespace container-probe-1323
Aug 11 11:55:43.465: INFO: Started pod liveness-dcf9f417-cd7c-469a-aa22-9b5297f9a4cc in namespace container-probe-1323
STEP: checking the pod's current state and verifying that restartCount is present
Aug 11 11:55:43.470: INFO: Initial restart count of pod liveness-dcf9f417-cd7c-469a-aa22-9b5297f9a4cc is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:59:44.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1323" for this suite.

• [SLOW TEST:248.027 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":487,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 11:59:44.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 11:59:46.040: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 11:59:48.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743986, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743986, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743986, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743985, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 11:59:50.548: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743986, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743986, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743986, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743985, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 11:59:52.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743986, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743986, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743986, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743985, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 11:59:54.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743986, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743986, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743986, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732743985, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 11:59:57.584: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 11:59:59.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8797" for this suite.
STEP: Destroying namespace "webhook-8797-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:21.728 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":44,"skipped":496,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:00:06.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:00:09.964: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d31c24e2-99d8-4588-864b-3fcf83b28ffe" in namespace "downward-api-1024" to be "Succeeded or Failed"
Aug 11 12:00:10.434: INFO: Pod "downwardapi-volume-d31c24e2-99d8-4588-864b-3fcf83b28ffe": Phase="Pending", Reason="", readiness=false. Elapsed: 470.38325ms
Aug 11 12:00:13.260: INFO: Pod "downwardapi-volume-d31c24e2-99d8-4588-864b-3fcf83b28ffe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.296937995s
Aug 11 12:00:15.863: INFO: Pod "downwardapi-volume-d31c24e2-99d8-4588-864b-3fcf83b28ffe": Phase="Pending", Reason="", readiness=false. Elapsed: 5.899118679s
Aug 11 12:00:17.928: INFO: Pod "downwardapi-volume-d31c24e2-99d8-4588-864b-3fcf83b28ffe": Phase="Pending", Reason="", readiness=false. Elapsed: 7.964825742s
Aug 11 12:00:20.583: INFO: Pod "downwardapi-volume-d31c24e2-99d8-4588-864b-3fcf83b28ffe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.6195559s
Aug 11 12:00:23.559: INFO: Pod "downwardapi-volume-d31c24e2-99d8-4588-864b-3fcf83b28ffe": Phase="Pending", Reason="", readiness=false. Elapsed: 13.595566415s
Aug 11 12:00:25.721: INFO: Pod "downwardapi-volume-d31c24e2-99d8-4588-864b-3fcf83b28ffe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.757559227s
STEP: Saw pod success
Aug 11 12:00:25.721: INFO: Pod "downwardapi-volume-d31c24e2-99d8-4588-864b-3fcf83b28ffe" satisfied condition "Succeeded or Failed"
Aug 11 12:00:25.723: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d31c24e2-99d8-4588-864b-3fcf83b28ffe container client-container: 
STEP: delete the pod
Aug 11 12:00:26.020: INFO: Waiting for pod downwardapi-volume-d31c24e2-99d8-4588-864b-3fcf83b28ffe to disappear
Aug 11 12:00:26.034: INFO: Pod downwardapi-volume-d31c24e2-99d8-4588-864b-3fcf83b28ffe no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:00:26.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1024" for this suite.

• [SLOW TEST:19.803 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":504,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:00:26.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:01:55.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9448" for this suite.

• [SLOW TEST:90.269 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":510,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:01:56.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Aug 11 12:01:58.227: INFO: created pod pod-service-account-defaultsa
Aug 11 12:01:58.227: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 11 12:01:58.462: INFO: created pod pod-service-account-mountsa
Aug 11 12:01:58.462: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 11 12:01:58.668: INFO: created pod pod-service-account-nomountsa
Aug 11 12:01:58.668: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 11 12:01:58.879: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 11 12:01:58.879: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 11 12:01:58.931: INFO: created pod pod-service-account-mountsa-mountspec
Aug 11 12:01:58.931: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 11 12:01:59.089: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 11 12:01:59.089: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 11 12:01:59.981: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 11 12:01:59.981: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 11 12:02:00.730: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 11 12:02:00.730: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 11 12:02:01.191: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 11 12:02:01.191: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:02:01.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9662" for this suite.

• [SLOW TEST:7.746 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":47,"skipped":524,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:02:04.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-993
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-993
Aug 11 12:02:06.477: INFO: Found 0 stateful pods, waiting for 1
Aug 11 12:02:16.836: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Aug 11 12:02:27.166: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Aug 11 12:02:36.711: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 11 12:02:38.636: INFO: Deleting all statefulset in ns statefulset-993
Aug 11 12:02:38.896: INFO: Scaling statefulset ss to 0
Aug 11 12:03:11.767: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 12:03:11.769: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:03:11.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-993" for this suite.

• [SLOW TEST:67.545 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":48,"skipped":563,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:03:11.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-f9d1718f-65b6-4db5-80d8-da353defec0b
STEP: Creating secret with name s-test-opt-upd-e53f1f84-a898-4d92-885f-e1bda7e2a276
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f9d1718f-65b6-4db5-80d8-da353defec0b
STEP: Updating secret s-test-opt-upd-e53f1f84-a898-4d92-885f-e1bda7e2a276
STEP: Creating secret with name s-test-opt-create-e78a8fe9-d03c-4eba-9fb1-3222ef5de6f6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:04:58.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9219" for this suite.

• [SLOW TEST:107.155 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":586,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:04:58.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-c9nl
STEP: Creating a pod to test atomic-volume-subpath
Aug 11 12:04:59.516: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-c9nl" in namespace "subpath-4404" to be "Succeeded or Failed"
Aug 11 12:04:59.618: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Pending", Reason="", readiness=false. Elapsed: 102.664199ms
Aug 11 12:05:01.621: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105684617s
Aug 11 12:05:03.970: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453905778s
Aug 11 12:05:05.980: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464662004s
Aug 11 12:05:08.436: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.920278779s
Aug 11 12:05:11.209: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Running", Reason="", readiness=true. Elapsed: 11.693704976s
Aug 11 12:05:13.629: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Running", Reason="", readiness=true. Elapsed: 14.113662966s
Aug 11 12:05:15.675: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Running", Reason="", readiness=true. Elapsed: 16.159128205s
Aug 11 12:05:17.679: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Running", Reason="", readiness=true. Elapsed: 18.162990993s
Aug 11 12:05:19.683: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Running", Reason="", readiness=true. Elapsed: 20.166863203s
Aug 11 12:05:21.685: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Running", Reason="", readiness=true. Elapsed: 22.169713527s
Aug 11 12:05:23.689: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Running", Reason="", readiness=true. Elapsed: 24.173132311s
Aug 11 12:05:25.747: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Running", Reason="", readiness=true. Elapsed: 26.23157669s
Aug 11 12:05:28.281: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Running", Reason="", readiness=true. Elapsed: 28.76537016s
Aug 11 12:05:30.286: INFO: Pod "pod-subpath-test-downwardapi-c9nl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.77020078s
STEP: Saw pod success
Aug 11 12:05:30.286: INFO: Pod "pod-subpath-test-downwardapi-c9nl" satisfied condition "Succeeded or Failed"
Aug 11 12:05:30.289: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-c9nl container test-container-subpath-downwardapi-c9nl: 
STEP: delete the pod
Aug 11 12:05:30.346: INFO: Waiting for pod pod-subpath-test-downwardapi-c9nl to disappear
Aug 11 12:05:30.358: INFO: Pod pod-subpath-test-downwardapi-c9nl no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-c9nl
Aug 11 12:05:30.358: INFO: Deleting pod "pod-subpath-test-downwardapi-c9nl" in namespace "subpath-4404"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:05:30.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4404" for this suite.

• [SLOW TEST:31.414 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":50,"skipped":588,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:05:30.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 11 12:05:30.481: INFO: Waiting up to 5m0s for pod "pod-a1a3e650-f8f6-4884-a6c7-d2555430b6ec" in namespace "emptydir-2610" to be "Succeeded or Failed"
Aug 11 12:05:30.484: INFO: Pod "pod-a1a3e650-f8f6-4884-a6c7-d2555430b6ec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.259665ms
Aug 11 12:05:32.677: INFO: Pod "pod-a1a3e650-f8f6-4884-a6c7-d2555430b6ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195718993s
Aug 11 12:05:34.700: INFO: Pod "pod-a1a3e650-f8f6-4884-a6c7-d2555430b6ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21855772s
Aug 11 12:05:36.815: INFO: Pod "pod-a1a3e650-f8f6-4884-a6c7-d2555430b6ec": Phase="Running", Reason="", readiness=true. Elapsed: 6.333696738s
Aug 11 12:05:38.850: INFO: Pod "pod-a1a3e650-f8f6-4884-a6c7-d2555430b6ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.368610127s
STEP: Saw pod success
Aug 11 12:05:38.850: INFO: Pod "pod-a1a3e650-f8f6-4884-a6c7-d2555430b6ec" satisfied condition "Succeeded or Failed"
Aug 11 12:05:38.852: INFO: Trying to get logs from node kali-worker pod pod-a1a3e650-f8f6-4884-a6c7-d2555430b6ec container test-container: 
STEP: delete the pod
Aug 11 12:05:39.035: INFO: Waiting for pod pod-a1a3e650-f8f6-4884-a6c7-d2555430b6ec to disappear
Aug 11 12:05:39.038: INFO: Pod pod-a1a3e650-f8f6-4884-a6c7-d2555430b6ec no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:05:39.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2610" for this suite.

• [SLOW TEST:8.677 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":635,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:05:39.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-460659d0-a9da-494e-a375-1d0b26f665eb
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:05:51.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9860" for this suite.

• [SLOW TEST:13.001 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":638,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:05:52.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-96013656-1e9a-47e0-b343-8423abf13baf in namespace container-probe-4256
Aug 11 12:06:01.906: INFO: Started pod liveness-96013656-1e9a-47e0-b343-8423abf13baf in namespace container-probe-4256
STEP: checking the pod's current state and verifying that restartCount is present
Aug 11 12:06:02.009: INFO: Initial restart count of pod liveness-96013656-1e9a-47e0-b343-8423abf13baf is 0
Aug 11 12:06:19.313: INFO: Restart count of pod container-probe-4256/liveness-96013656-1e9a-47e0-b343-8423abf13baf is now 1 (17.303219219s elapsed)
Aug 11 12:06:37.988: INFO: Restart count of pod container-probe-4256/liveness-96013656-1e9a-47e0-b343-8423abf13baf is now 2 (35.978423325s elapsed)
Aug 11 12:07:05.411: INFO: Restart count of pod container-probe-4256/liveness-96013656-1e9a-47e0-b343-8423abf13baf is now 3 (1m3.401320498s elapsed)
Aug 11 12:07:28.412: INFO: Restart count of pod container-probe-4256/liveness-96013656-1e9a-47e0-b343-8423abf13baf is now 4 (1m26.402461726s elapsed)
Aug 11 12:08:38.073: INFO: Restart count of pod container-probe-4256/liveness-96013656-1e9a-47e0-b343-8423abf13baf is now 5 (2m36.06369946s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:08:38.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4256" for this suite.

• [SLOW TEST:166.952 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":667,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:08:39.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-rhv7
STEP: Creating a pod to test atomic-volume-subpath
Aug 11 12:08:39.320: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rhv7" in namespace "subpath-4852" to be "Succeeded or Failed"
Aug 11 12:08:39.340: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.906339ms
Aug 11 12:08:41.739: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.418363867s
Aug 11 12:08:43.743: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.422834571s
Aug 11 12:08:45.747: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.426608506s
Aug 11 12:08:47.751: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Running", Reason="", readiness=true. Elapsed: 8.430328479s
Aug 11 12:08:49.756: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Running", Reason="", readiness=true. Elapsed: 10.435341477s
Aug 11 12:08:52.107: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Running", Reason="", readiness=true. Elapsed: 12.78727641s
Aug 11 12:08:54.111: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Running", Reason="", readiness=true. Elapsed: 14.791220803s
Aug 11 12:08:56.115: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Running", Reason="", readiness=true. Elapsed: 16.794809926s
Aug 11 12:08:58.241: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Running", Reason="", readiness=true. Elapsed: 18.920547685s
Aug 11 12:09:00.245: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Running", Reason="", readiness=true. Elapsed: 20.924504311s
Aug 11 12:09:02.289: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Running", Reason="", readiness=true. Elapsed: 22.968658995s
Aug 11 12:09:04.397: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Running", Reason="", readiness=true. Elapsed: 25.076511065s
Aug 11 12:09:06.438: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Running", Reason="", readiness=true. Elapsed: 27.117702878s
Aug 11 12:09:08.442: INFO: Pod "pod-subpath-test-projected-rhv7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.12158488s
STEP: Saw pod success
Aug 11 12:09:08.442: INFO: Pod "pod-subpath-test-projected-rhv7" satisfied condition "Succeeded or Failed"
Aug 11 12:09:08.444: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-projected-rhv7 container test-container-subpath-projected-rhv7: 
STEP: delete the pod
Aug 11 12:09:08.659: INFO: Waiting for pod pod-subpath-test-projected-rhv7 to disappear
Aug 11 12:09:08.833: INFO: Pod pod-subpath-test-projected-rhv7 no longer exists
STEP: Deleting pod pod-subpath-test-projected-rhv7
Aug 11 12:09:08.833: INFO: Deleting pod "pod-subpath-test-projected-rhv7" in namespace "subpath-4852"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:09:08.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4852" for this suite.

• [SLOW TEST:29.845 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":54,"skipped":668,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:09:08.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Aug 11 12:09:09.363: INFO: Waiting up to 5m0s for pod "client-containers-7014ed40-8cc3-4462-90af-1ce3c1e69711" in namespace "containers-4996" to be "Succeeded or Failed"
Aug 11 12:09:09.441: INFO: Pod "client-containers-7014ed40-8cc3-4462-90af-1ce3c1e69711": Phase="Pending", Reason="", readiness=false. Elapsed: 77.909051ms
Aug 11 12:09:11.547: INFO: Pod "client-containers-7014ed40-8cc3-4462-90af-1ce3c1e69711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184125906s
Aug 11 12:09:13.551: INFO: Pod "client-containers-7014ed40-8cc3-4462-90af-1ce3c1e69711": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188612659s
Aug 11 12:09:16.013: INFO: Pod "client-containers-7014ed40-8cc3-4462-90af-1ce3c1e69711": Phase="Pending", Reason="", readiness=false. Elapsed: 6.650726314s
Aug 11 12:09:18.017: INFO: Pod "client-containers-7014ed40-8cc3-4462-90af-1ce3c1e69711": Phase="Running", Reason="", readiness=true. Elapsed: 8.653940878s
Aug 11 12:09:20.385: INFO: Pod "client-containers-7014ed40-8cc3-4462-90af-1ce3c1e69711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.022010371s
STEP: Saw pod success
Aug 11 12:09:20.385: INFO: Pod "client-containers-7014ed40-8cc3-4462-90af-1ce3c1e69711" satisfied condition "Succeeded or Failed"
Aug 11 12:09:20.388: INFO: Trying to get logs from node kali-worker pod client-containers-7014ed40-8cc3-4462-90af-1ce3c1e69711 container test-container: 
STEP: delete the pod
Aug 11 12:09:20.545: INFO: Waiting for pod client-containers-7014ed40-8cc3-4462-90af-1ce3c1e69711 to disappear
Aug 11 12:09:21.144: INFO: Pod client-containers-7014ed40-8cc3-4462-90af-1ce3c1e69711 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:09:21.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4996" for this suite.

• [SLOW TEST:12.581 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":675,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:09:21.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-470d4960-b6ca-47e1-9d30-5c4ec2e34b7e
STEP: Creating a pod to test consume secrets
Aug 11 12:09:22.349: INFO: Waiting up to 5m0s for pod "pod-secrets-0023f971-1300-4e4b-a50c-87b13d39a4df" in namespace "secrets-5530" to be "Succeeded or Failed"
Aug 11 12:09:22.365: INFO: Pod "pod-secrets-0023f971-1300-4e4b-a50c-87b13d39a4df": Phase="Pending", Reason="", readiness=false. Elapsed: 15.745273ms
Aug 11 12:09:24.370: INFO: Pod "pod-secrets-0023f971-1300-4e4b-a50c-87b13d39a4df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020249202s
Aug 11 12:09:26.414: INFO: Pod "pod-secrets-0023f971-1300-4e4b-a50c-87b13d39a4df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064960742s
Aug 11 12:09:28.450: INFO: Pod "pod-secrets-0023f971-1300-4e4b-a50c-87b13d39a4df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100557443s
STEP: Saw pod success
Aug 11 12:09:28.450: INFO: Pod "pod-secrets-0023f971-1300-4e4b-a50c-87b13d39a4df" satisfied condition "Succeeded or Failed"
Aug 11 12:09:28.452: INFO: Trying to get logs from node kali-worker pod pod-secrets-0023f971-1300-4e4b-a50c-87b13d39a4df container secret-volume-test: 
STEP: delete the pod
Aug 11 12:09:29.512: INFO: Waiting for pod pod-secrets-0023f971-1300-4e4b-a50c-87b13d39a4df to disappear
Aug 11 12:09:29.600: INFO: Pod pod-secrets-0023f971-1300-4e4b-a50c-87b13d39a4df no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:09:29.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5530" for this suite.

• [SLOW TEST:8.180 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":680,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:09:29.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 11 12:09:30.260: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-807'
Aug 11 12:09:53.027: INFO: stderr: ""
Aug 11 12:09:53.027: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 11 12:10:08.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-807 -o json'
Aug 11 12:10:08.241: INFO: stderr: ""
Aug 11 12:10:08.241: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-11T12:09:52Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-08-11T12:09:52Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                        
    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.2.64\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                            }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-08-11T12:10:03Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-807\",\n        \"resourceVersion\": \"8554603\",\n        \"selfLink\": 
\"/api/v1/namespaces/kubectl-807/pods/e2e-test-httpd-pod\",\n        \"uid\": \"b9b3ecaa-64cb-417c-a8f6-9030c81a5fc5\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-kb4nc\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-kb4nc\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-kb4nc\"\n                }\n         
   }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-11T12:09:54Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-11T12:10:03Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-11T12:10:03Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-11T12:09:53Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://ae9737a1c6a06b368fff4b2534576687eec58ba3ccc0b8763eec4894a60464de\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-11T12:10:03Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.13\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.64\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.64\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        
\"startTime\": \"2020-08-11T12:09:54Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 11 12:10:08.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-807'
Aug 11 12:10:09.460: INFO: stderr: ""
Aug 11 12:10:09.460: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Aug 11 12:10:09.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-807'
Aug 11 12:10:16.040: INFO: stderr: ""
Aug 11 12:10:16.040: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:10:16.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-807" for this suite.

• [SLOW TEST:46.439 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":57,"skipped":686,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:10:16.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:10:16.148: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06e277d9-e69a-4b91-b914-fc2644017e9d" in namespace "projected-2928" to be "Succeeded or Failed"
Aug 11 12:10:16.158: INFO: Pod "downwardapi-volume-06e277d9-e69a-4b91-b914-fc2644017e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.160999ms
Aug 11 12:10:18.548: INFO: Pod "downwardapi-volume-06e277d9-e69a-4b91-b914-fc2644017e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.39958905s
Aug 11 12:10:20.552: INFO: Pod "downwardapi-volume-06e277d9-e69a-4b91-b914-fc2644017e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.404021346s
Aug 11 12:10:22.583: INFO: Pod "downwardapi-volume-06e277d9-e69a-4b91-b914-fc2644017e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434998449s
Aug 11 12:10:24.607: INFO: Pod "downwardapi-volume-06e277d9-e69a-4b91-b914-fc2644017e9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.458842267s
STEP: Saw pod success
Aug 11 12:10:24.607: INFO: Pod "downwardapi-volume-06e277d9-e69a-4b91-b914-fc2644017e9d" satisfied condition "Succeeded or Failed"
Aug 11 12:10:24.610: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-06e277d9-e69a-4b91-b914-fc2644017e9d container client-container: 
STEP: delete the pod
Aug 11 12:10:25.056: INFO: Waiting for pod downwardapi-volume-06e277d9-e69a-4b91-b914-fc2644017e9d to disappear
Aug 11 12:10:25.331: INFO: Pod downwardapi-volume-06e277d9-e69a-4b91-b914-fc2644017e9d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:10:25.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2928" for this suite.

• [SLOW TEST:9.291 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":701,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:10:25.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:10:26.925: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:10:29.312: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744626, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744626, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744626, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744626, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:10:31.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744626, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744626, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744626, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744626, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:10:34.445: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:10:35.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2190" for this suite.
STEP: Destroying namespace "webhook-2190-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.456 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":59,"skipped":710,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:10:36.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:10:46.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7845" for this suite.

• [SLOW TEST:9.244 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":60,"skipped":741,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:10:46.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:11:04.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9580" for this suite.

• [SLOW TEST:18.207 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":61,"skipped":747,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:11:04.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 11 12:11:21.873: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3199 PodName:pod-sharedvolume-1099fc6f-9bf2-431b-b9e3-4084314a19bb ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:11:21.873: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:11:21.903862       7 log.go:172] (0xc002d14370) (0xc0019e0640) Create stream
I0811 12:11:21.903892       7 log.go:172] (0xc002d14370) (0xc0019e0640) Stream added, broadcasting: 1
I0811 12:11:21.905805       7 log.go:172] (0xc002d14370) Reply frame received for 1
I0811 12:11:21.905836       7 log.go:172] (0xc002d14370) (0xc00123c0a0) Create stream
I0811 12:11:21.905848       7 log.go:172] (0xc002d14370) (0xc00123c0a0) Stream added, broadcasting: 3
I0811 12:11:21.906636       7 log.go:172] (0xc002d14370) Reply frame received for 3
I0811 12:11:21.906657       7 log.go:172] (0xc002d14370) (0xc00123c140) Create stream
I0811 12:11:21.906664       7 log.go:172] (0xc002d14370) (0xc00123c140) Stream added, broadcasting: 5
I0811 12:11:21.907414       7 log.go:172] (0xc002d14370) Reply frame received for 5
I0811 12:11:21.961081       7 log.go:172] (0xc002d14370) Data frame received for 3
I0811 12:11:21.961144       7 log.go:172] (0xc00123c0a0) (3) Data frame handling
I0811 12:11:21.961172       7 log.go:172] (0xc00123c0a0) (3) Data frame sent
I0811 12:11:21.961193       7 log.go:172] (0xc002d14370) Data frame received for 3
I0811 12:11:21.961202       7 log.go:172] (0xc00123c0a0) (3) Data frame handling
I0811 12:11:21.961256       7 log.go:172] (0xc002d14370) Data frame received for 5
I0811 12:11:21.961274       7 log.go:172] (0xc00123c140) (5) Data frame handling
I0811 12:11:21.962699       7 log.go:172] (0xc002d14370) Data frame received for 1
I0811 12:11:21.962719       7 log.go:172] (0xc0019e0640) (1) Data frame handling
I0811 12:11:21.962733       7 log.go:172] (0xc0019e0640) (1) Data frame sent
I0811 12:11:21.962747       7 log.go:172] (0xc002d14370) (0xc0019e0640) Stream removed, broadcasting: 1
I0811 12:11:21.962843       7 log.go:172] (0xc002d14370) (0xc0019e0640) Stream removed, broadcasting: 1
I0811 12:11:21.962855       7 log.go:172] (0xc002d14370) (0xc00123c0a0) Stream removed, broadcasting: 3
I0811 12:11:21.963003       7 log.go:172] (0xc002d14370) (0xc00123c140) Stream removed, broadcasting: 5
Aug 11 12:11:21.963: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:11:21.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0811 12:11:21.963566       7 log.go:172] (0xc002d14370) Go away received
STEP: Destroying namespace "emptydir-3199" for this suite.

• [SLOW TEST:17.726 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":62,"skipped":751,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:11:21.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 11 12:11:22.131: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Aug 11 12:11:22.948: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 11 12:11:26.227: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744683, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:11:28.349: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744683, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:11:30.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744683, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:11:32.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744683, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:11:34.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744683, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744682, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:11:36.956: INFO: Waited 719.546115ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:11:38.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3061" for this suite.

• [SLOW TEST:16.848 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":63,"skipped":758,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:11:38.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-0338bd47-4b0e-49ba-aa16-4b77bb1c4483
STEP: Creating configMap with name cm-test-opt-upd-24515e8e-6cf3-4a40-96b2-66cd2c36126a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0338bd47-4b0e-49ba-aa16-4b77bb1c4483
STEP: Updating configmap cm-test-opt-upd-24515e8e-6cf3-4a40-96b2-66cd2c36126a
STEP: Creating configMap with name cm-test-opt-create-65f441fd-b9e5-4203-8865-708ee874af27
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:13:05.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1241" for this suite.

• [SLOW TEST:86.264 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":779,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:13:05.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-09def814-a16b-4cd2-8020-e398ad5f97e5
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-09def814-a16b-4cd2-8020-e398ad5f97e5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:13:17.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7088" for this suite.

• [SLOW TEST:12.477 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":783,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:13:17.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 11 12:13:18.795: INFO: Waiting up to 5m0s for pod "pod-f0872061-6c58-4a20-a9ea-52d5f0532ec8" in namespace "emptydir-9012" to be "Succeeded or Failed"
Aug 11 12:13:19.387: INFO: Pod "pod-f0872061-6c58-4a20-a9ea-52d5f0532ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 592.200946ms
Aug 11 12:13:21.686: INFO: Pod "pod-f0872061-6c58-4a20-a9ea-52d5f0532ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.891889602s
Aug 11 12:13:23.728: INFO: Pod "pod-f0872061-6c58-4a20-a9ea-52d5f0532ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.933458686s
Aug 11 12:13:25.732: INFO: Pod "pod-f0872061-6c58-4a20-a9ea-52d5f0532ec8": Phase="Running", Reason="", readiness=true. Elapsed: 6.937580437s
Aug 11 12:13:27.736: INFO: Pod "pod-f0872061-6c58-4a20-a9ea-52d5f0532ec8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.941820894s
STEP: Saw pod success
Aug 11 12:13:27.736: INFO: Pod "pod-f0872061-6c58-4a20-a9ea-52d5f0532ec8" satisfied condition "Succeeded or Failed"
Aug 11 12:13:27.739: INFO: Trying to get logs from node kali-worker2 pod pod-f0872061-6c58-4a20-a9ea-52d5f0532ec8 container test-container: 
STEP: delete the pod
Aug 11 12:13:27.911: INFO: Waiting for pod pod-f0872061-6c58-4a20-a9ea-52d5f0532ec8 to disappear
Aug 11 12:13:27.997: INFO: Pod pod-f0872061-6c58-4a20-a9ea-52d5f0532ec8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:13:27.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9012" for this suite.

• [SLOW TEST:10.443 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":805,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:13:28.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1290
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1290
STEP: Creating statefulset with conflicting port in namespace statefulset-1290
STEP: Waiting until pod test-pod starts running in namespace statefulset-1290
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-1290

Aug 11 12:13:34.303: INFO: Observed stateful pod in namespace: statefulset-1290, name: ss-0, uid: f499c4c4-da93-497f-833f-1584f5c52764, status phase: Pending. Waiting for statefulset controller to delete.
Aug 11 12:13:34.372: INFO: Observed stateful pod in namespace: statefulset-1290, name: ss-0, uid: f499c4c4-da93-497f-833f-1584f5c52764, status phase: Failed. Waiting for statefulset controller to delete.
Aug 11 12:13:34.479: INFO: Observed stateful pod in namespace: statefulset-1290, name: ss-0, uid: f499c4c4-da93-497f-833f-1584f5c52764, status phase: Failed. Waiting for statefulset controller to delete.
Aug 11 12:13:34.508: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1290
STEP: Removing pod with conflicting port in namespace statefulset-1290
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1290 and running
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 11 12:13:47.279: INFO: Deleting all statefulset in ns statefulset-1290
Aug 11 12:13:47.282: INFO: Scaling statefulset ss to 0
Aug 11 12:13:57.728: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 12:13:57.731: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:13:58.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1290" for this suite.

• [SLOW TEST:31.352 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":67,"skipped":811,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:13:59.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5474.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5474.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5474.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5474.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5474.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5474.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
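The probe loops above build the pod A-record name (`podARec`) by rewriting the pod's IP with awk: dots become dashes, then `<namespace>.pod.cluster.local` is appended. A minimal, self-contained sketch of just that transformation (the `pod_a_record` helper name is illustrative, not part of the test framework; the pod IP is a made-up example, the namespace matches this run):

```shell
#!/bin/sh
# Rebuild the podARec value the probe script computes: turn a pod IP into
# its dashed pod A-record DNS name.
pod_a_record() {
  # $1 = pod IP, $2 = namespace
  echo "$1" | awk -F. -v ns="$2" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

pod_a_record 10.244.1.7 dns-5474   # → 10-244-1-7.dns-5474.pod.cluster.local
```

Inside the pod, the test then resolves that name twice, once with `dig +notcp` (UDP) and once with `dig +tcp`, to cover both transports.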

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 11 12:14:16.575: INFO: DNS probes using dns-5474/dns-test-8ee38649-d6a6-4a71-b7c5-897224be3586 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:14:17.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5474" for this suite.

• [SLOW TEST:18.077 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":68,"skipped":838,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:14:17.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9725
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-9725
I0811 12:14:18.029617       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9725, replica count: 2
I0811 12:14:21.080052       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 12:14:24.080280       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 12:14:27.080507       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 12:14:30.080933       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 11 12:14:30.080: INFO: Creating new exec pod
Aug 11 12:14:37.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-9725 execpod4lzzc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 11 12:14:37.865: INFO: stderr: "I0811 12:14:37.691840     260 log.go:172] (0xc00003ac60) (0xc0006037c0) Create stream\nI0811 12:14:37.691891     260 log.go:172] (0xc00003ac60) (0xc0006037c0) Stream added, broadcasting: 1\nI0811 12:14:37.694230     260 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0811 12:14:37.694265     260 log.go:172] (0xc00003ac60) (0xc00020d720) Create stream\nI0811 12:14:37.694280     260 log.go:172] (0xc00003ac60) (0xc00020d720) Stream added, broadcasting: 3\nI0811 12:14:37.695014     260 log.go:172] (0xc00003ac60) Reply frame received for 3\nI0811 12:14:37.695038     260 log.go:172] (0xc00003ac60) (0xc0008f2000) Create stream\nI0811 12:14:37.695047     260 log.go:172] (0xc00003ac60) (0xc0008f2000) Stream added, broadcasting: 5\nI0811 12:14:37.695677     260 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0811 12:14:37.762640     260 log.go:172] (0xc00003ac60) Data frame received for 5\nI0811 12:14:37.762672     260 log.go:172] (0xc0008f2000) (5) Data frame handling\nI0811 12:14:37.762696     260 log.go:172] (0xc0008f2000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0811 12:14:37.856376     260 log.go:172] (0xc00003ac60) Data frame received for 3\nI0811 12:14:37.856443     260 log.go:172] (0xc00020d720) (3) Data frame handling\nI0811 12:14:37.856488     260 log.go:172] (0xc00003ac60) Data frame received for 5\nI0811 12:14:37.856514     260 log.go:172] (0xc0008f2000) (5) Data frame handling\nI0811 12:14:37.856550     260 log.go:172] (0xc0008f2000) (5) Data frame sent\nI0811 12:14:37.856573     260 log.go:172] (0xc00003ac60) Data frame received for 5\nI0811 12:14:37.856596     260 log.go:172] (0xc0008f2000) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0811 12:14:37.858838     260 log.go:172] (0xc00003ac60) Data frame received for 1\nI0811 12:14:37.858872     260 log.go:172] (0xc0006037c0) (1) Data frame handling\nI0811 12:14:37.858915     260 log.go:172] (0xc0006037c0) (1) Data frame sent\nI0811 12:14:37.858940     260 log.go:172] (0xc00003ac60) (0xc0006037c0) Stream removed, broadcasting: 1\nI0811 12:14:37.858968     260 log.go:172] (0xc00003ac60) Go away received\nI0811 12:14:37.859423     260 log.go:172] (0xc00003ac60) (0xc0006037c0) Stream removed, broadcasting: 1\nI0811 12:14:37.859443     260 log.go:172] (0xc00003ac60) (0xc00020d720) Stream removed, broadcasting: 3\nI0811 12:14:37.859454     260 log.go:172] (0xc00003ac60) (0xc0008f2000) Stream removed, broadcasting: 5\n"
Aug 11 12:14:37.865: INFO: stdout: ""
Aug 11 12:14:37.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-9725 execpod4lzzc -- /bin/sh -x -c nc -zv -t -w 2 10.110.101.228 80'
Aug 11 12:14:38.057: INFO: stderr: "I0811 12:14:37.983755     280 log.go:172] (0xc000537ad0) (0xc0009b23c0) Create stream\nI0811 12:14:37.983799     280 log.go:172] (0xc000537ad0) (0xc0009b23c0) Stream added, broadcasting: 1\nI0811 12:14:37.986632     280 log.go:172] (0xc000537ad0) Reply frame received for 1\nI0811 12:14:37.986683     280 log.go:172] (0xc000537ad0) (0xc0005a7680) Create stream\nI0811 12:14:37.986719     280 log.go:172] (0xc000537ad0) (0xc0005a7680) Stream added, broadcasting: 3\nI0811 12:14:37.987403     280 log.go:172] (0xc000537ad0) Reply frame received for 3\nI0811 12:14:37.987439     280 log.go:172] (0xc000537ad0) (0xc00038caa0) Create stream\nI0811 12:14:37.987451     280 log.go:172] (0xc000537ad0) (0xc00038caa0) Stream added, broadcasting: 5\nI0811 12:14:37.988456     280 log.go:172] (0xc000537ad0) Reply frame received for 5\nI0811 12:14:38.051286     280 log.go:172] (0xc000537ad0) Data frame received for 3\nI0811 12:14:38.051302     280 log.go:172] (0xc0005a7680) (3) Data frame handling\nI0811 12:14:38.051352     280 log.go:172] (0xc000537ad0) Data frame received for 5\nI0811 12:14:38.051378     280 log.go:172] (0xc00038caa0) (5) Data frame handling\nI0811 12:14:38.051402     280 log.go:172] (0xc00038caa0) (5) Data frame sent\nI0811 12:14:38.051418     280 log.go:172] (0xc000537ad0) Data frame received for 5\nI0811 12:14:38.051427     280 log.go:172] (0xc00038caa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.101.228 80\nConnection to 10.110.101.228 80 port [tcp/http] succeeded!\nI0811 12:14:38.052503     280 log.go:172] (0xc000537ad0) Data frame received for 1\nI0811 12:14:38.052513     280 log.go:172] (0xc0009b23c0) (1) Data frame handling\nI0811 12:14:38.052519     280 log.go:172] (0xc0009b23c0) (1) Data frame sent\nI0811 12:14:38.052525     280 log.go:172] (0xc000537ad0) (0xc0009b23c0) Stream removed, broadcasting: 1\nI0811 12:14:38.052802     280 log.go:172] (0xc000537ad0) (0xc0009b23c0) Stream removed, broadcasting: 1\nI0811 12:14:38.052817     280 log.go:172] (0xc000537ad0) (0xc0005a7680) Stream removed, broadcasting: 3\nI0811 12:14:38.052875     280 log.go:172] (0xc000537ad0) Go away received\nI0811 12:14:38.052923     280 log.go:172] (0xc000537ad0) (0xc00038caa0) Stream removed, broadcasting: 5\n"
Aug 11 12:14:38.057: INFO: stdout: ""
Aug 11 12:14:38.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-9725 execpod4lzzc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31383'
Aug 11 12:14:38.379: INFO: stderr: "I0811 12:14:38.268695     302 log.go:172] (0xc000792b00) (0xc0007823c0) Create stream\nI0811 12:14:38.268819     302 log.go:172] (0xc000792b00) (0xc0007823c0) Stream added, broadcasting: 1\nI0811 12:14:38.272310     302 log.go:172] (0xc000792b00) Reply frame received for 1\nI0811 12:14:38.272329     302 log.go:172] (0xc000792b00) (0xc0005fe460) Create stream\nI0811 12:14:38.272339     302 log.go:172] (0xc000792b00) (0xc0005fe460) Stream added, broadcasting: 3\nI0811 12:14:38.273343     302 log.go:172] (0xc000792b00) Reply frame received for 3\nI0811 12:14:38.273400     302 log.go:172] (0xc000792b00) (0xc000702000) Create stream\nI0811 12:14:38.273482     302 log.go:172] (0xc000792b00) (0xc000702000) Stream added, broadcasting: 5\nI0811 12:14:38.274366     302 log.go:172] (0xc000792b00) Reply frame received for 5\nI0811 12:14:38.327475     302 log.go:172] (0xc000792b00) Data frame received for 5\nI0811 12:14:38.327495     302 log.go:172] (0xc000702000) (5) Data frame handling\nI0811 12:14:38.327508     302 log.go:172] (0xc000702000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 31383\nI0811 12:14:38.371845     302 log.go:172] (0xc000792b00) Data frame received for 5\nI0811 12:14:38.371888     302 log.go:172] (0xc000702000) (5) Data frame handling\nI0811 12:14:38.371902     302 log.go:172] (0xc000702000) (5) Data frame sent\nI0811 12:14:38.371912     302 log.go:172] (0xc000792b00) Data frame received for 5\nI0811 12:14:38.371920     302 log.go:172] (0xc000702000) (5) Data frame handling\nConnection to 172.18.0.13 31383 port [tcp/31383] succeeded!\nI0811 12:14:38.371941     302 log.go:172] (0xc000792b00) Data frame received for 3\nI0811 12:14:38.371956     302 log.go:172] (0xc0005fe460) (3) Data frame handling\nI0811 12:14:38.373613     302 log.go:172] (0xc000792b00) Data frame received for 1\nI0811 12:14:38.373633     302 log.go:172] (0xc0007823c0) (1) Data frame handling\nI0811 12:14:38.373652     302 log.go:172] (0xc0007823c0) (1) Data frame sent\nI0811 12:14:38.373696     302 log.go:172] (0xc000792b00) (0xc0007823c0) Stream removed, broadcasting: 1\nI0811 12:14:38.373973     302 log.go:172] (0xc000792b00) Go away received\nI0811 12:14:38.374005     302 log.go:172] (0xc000792b00) (0xc0007823c0) Stream removed, broadcasting: 1\nI0811 12:14:38.374020     302 log.go:172] (0xc000792b00) (0xc0005fe460) Stream removed, broadcasting: 3\nI0811 12:14:38.374028     302 log.go:172] (0xc000792b00) (0xc000702000) Stream removed, broadcasting: 5\n"
Aug 11 12:14:38.379: INFO: stdout: ""
Aug 11 12:14:38.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-9725 execpod4lzzc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31383'
Aug 11 12:14:38.568: INFO: stderr: "I0811 12:14:38.504432     321 log.go:172] (0xc00003a580) (0xc000208aa0) Create stream\nI0811 12:14:38.504478     321 log.go:172] (0xc00003a580) (0xc000208aa0) Stream added, broadcasting: 1\nI0811 12:14:38.506436     321 log.go:172] (0xc00003a580) Reply frame received for 1\nI0811 12:14:38.506470     321 log.go:172] (0xc00003a580) (0xc0008e0000) Create stream\nI0811 12:14:38.506478     321 log.go:172] (0xc00003a580) (0xc0008e0000) Stream added, broadcasting: 3\nI0811 12:14:38.507027     321 log.go:172] (0xc00003a580) Reply frame received for 3\nI0811 12:14:38.507049     321 log.go:172] (0xc00003a580) (0xc0008e00a0) Create stream\nI0811 12:14:38.507062     321 log.go:172] (0xc00003a580) (0xc0008e00a0) Stream added, broadcasting: 5\nI0811 12:14:38.507621     321 log.go:172] (0xc00003a580) Reply frame received for 5\nI0811 12:14:38.562436     321 log.go:172] (0xc00003a580) Data frame received for 3\nI0811 12:14:38.562470     321 log.go:172] (0xc0008e0000) (3) Data frame handling\nI0811 12:14:38.562500     321 log.go:172] (0xc00003a580) Data frame received for 5\nI0811 12:14:38.562514     321 log.go:172] (0xc0008e00a0) (5) Data frame handling\nI0811 12:14:38.562527     321 log.go:172] (0xc0008e00a0) (5) Data frame sent\nI0811 12:14:38.562549     321 log.go:172] (0xc00003a580) Data frame received for 5\nI0811 12:14:38.562562     321 log.go:172] (0xc0008e00a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 31383\nConnection to 172.18.0.15 31383 port [tcp/31383] succeeded!\nI0811 12:14:38.563565     321 log.go:172] (0xc00003a580) Data frame received for 1\nI0811 12:14:38.563576     321 log.go:172] (0xc000208aa0) (1) Data frame handling\nI0811 12:14:38.563585     321 log.go:172] (0xc000208aa0) (1) Data frame sent\nI0811 12:14:38.563594     321 log.go:172] (0xc00003a580) (0xc000208aa0) Stream removed, broadcasting: 1\nI0811 12:14:38.563630     321 log.go:172] (0xc00003a580) Go away received\nI0811 12:14:38.563875     321 log.go:172] (0xc00003a580) (0xc000208aa0) Stream removed, broadcasting: 1\nI0811 12:14:38.563887     321 log.go:172] (0xc00003a580) (0xc0008e0000) Stream removed, broadcasting: 3\nI0811 12:14:38.563893     321 log.go:172] (0xc00003a580) (0xc0008e00a0) Stream removed, broadcasting: 5\n"
Aug 11 12:14:38.568: INFO: stdout: ""
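Each probe above shells into the exec pod and runs `nc -zv -t -w 2 <host> <port>` against, in turn, the service DNS name, the ClusterIP, and each node's NodePort. Where nc is not available, the same TCP connect check can be sketched with bash's `/dev/tcp` pseudo-device; this is a hedged alternative, not what the test runs (assumes bash and coreutils `timeout`; the `probe` name is illustrative):

```shell
#!/bin/bash
# TCP connect check roughly equivalent to `nc -zv -t -w 2 host port`.
# Exit status mirrors nc: 0 on a successful connect, non-zero on
# refusal or after the 2-second timeout.
probe() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "Connection to ${host} ${port} port succeeded!"
  else
    echo "Connection to ${host} ${port} port failed" >&2
    return 1
  fi
}
```

Inside the exec pod, the equivalent calls for this run would be `probe externalname-service 80`, `probe 10.110.101.228 80`, and `probe 172.18.0.13 31383`.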
Aug 11 12:14:38.568: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:14:38.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9725" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:21.494 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":69,"skipped":851,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:14:38.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:14:43.207: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:14:45.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744883, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744883, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744883, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744881, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:14:47.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744883, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744883, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744883, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744881, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:14:49.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744883, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744883, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744883, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744881, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:14:52.691: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:14:56.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3233" for this suite.
STEP: Destroying namespace "webhook-3233-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.761 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":70,"skipped":885,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:14:56.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 11 12:14:57.600: INFO: Waiting up to 5m0s for pod "pod-19b4ab2d-d22b-46fb-9a6a-de7da8817b02" in namespace "emptydir-2688" to be "Succeeded or Failed"
Aug 11 12:14:57.816: INFO: Pod "pod-19b4ab2d-d22b-46fb-9a6a-de7da8817b02": Phase="Pending", Reason="", readiness=false. Elapsed: 216.539364ms
Aug 11 12:14:59.821: INFO: Pod "pod-19b4ab2d-d22b-46fb-9a6a-de7da8817b02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220962785s
Aug 11 12:15:01.961: INFO: Pod "pod-19b4ab2d-d22b-46fb-9a6a-de7da8817b02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.361242892s
Aug 11 12:15:04.003: INFO: Pod "pod-19b4ab2d-d22b-46fb-9a6a-de7da8817b02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.403260301s
STEP: Saw pod success
Aug 11 12:15:04.003: INFO: Pod "pod-19b4ab2d-d22b-46fb-9a6a-de7da8817b02" satisfied condition "Succeeded or Failed"
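The framework reaches this result by re-reading the pod phase on an interval until it matches "Succeeded or Failed" or the 5m0s deadline expires. A generic sketch of that poll-with-deadline loop (the `wait_for` helper is illustrative, not the framework's API):

```shell
#!/bin/sh
# wait_for CONDITION TIMEOUT_SECS INTERVAL_SECS
# Re-evaluates CONDITION until it exits 0; gives up after TIMEOUT_SECS,
# mirroring the "Waiting up to 5m0s for pod ..." behavior in the log.
wait_for() {
  deadline=$(( $(date +%s) + $2 ))
  until eval "$1"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1  # timed out
    fi
    sleep "$3"
  done
}
```

With kubectl (hypothetical pod name), the condition would look like `wait_for '[ "$(kubectl get pod mypod -o jsonpath={.status.phase})" = Succeeded ]' 300 2`.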
Aug 11 12:15:04.007: INFO: Trying to get logs from node kali-worker pod pod-19b4ab2d-d22b-46fb-9a6a-de7da8817b02 container test-container: 
STEP: delete the pod
Aug 11 12:15:04.163: INFO: Waiting for pod pod-19b4ab2d-d22b-46fb-9a6a-de7da8817b02 to disappear
Aug 11 12:15:04.188: INFO: Pod pod-19b4ab2d-d22b-46fb-9a6a-de7da8817b02 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:15:04.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2688" for this suite.

• [SLOW TEST:7.653 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":905,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:15:04.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 11 12:15:25.336: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2909 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:15:25.337: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:15:25.359318       7 log.go:172] (0xc002da2160) (0xc00175db80) Create stream
I0811 12:15:25.359343       7 log.go:172] (0xc002da2160) (0xc00175db80) Stream added, broadcasting: 1
I0811 12:15:25.361043       7 log.go:172] (0xc002da2160) Reply frame received for 1
I0811 12:15:25.361074       7 log.go:172] (0xc002da2160) (0xc00189cbe0) Create stream
I0811 12:15:25.361085       7 log.go:172] (0xc002da2160) (0xc00189cbe0) Stream added, broadcasting: 3
I0811 12:15:25.361835       7 log.go:172] (0xc002da2160) Reply frame received for 3
I0811 12:15:25.361861       7 log.go:172] (0xc002da2160) (0xc00189cd20) Create stream
I0811 12:15:25.361869       7 log.go:172] (0xc002da2160) (0xc00189cd20) Stream added, broadcasting: 5
I0811 12:15:25.362598       7 log.go:172] (0xc002da2160) Reply frame received for 5
I0811 12:15:25.433962       7 log.go:172] (0xc002da2160) Data frame received for 3
I0811 12:15:25.434001       7 log.go:172] (0xc00189cbe0) (3) Data frame handling
I0811 12:15:25.434016       7 log.go:172] (0xc00189cbe0) (3) Data frame sent
I0811 12:15:25.434024       7 log.go:172] (0xc002da2160) Data frame received for 3
I0811 12:15:25.434035       7 log.go:172] (0xc00189cbe0) (3) Data frame handling
I0811 12:15:25.434065       7 log.go:172] (0xc002da2160) Data frame received for 5
I0811 12:15:25.434082       7 log.go:172] (0xc00189cd20) (5) Data frame handling
I0811 12:15:25.435489       7 log.go:172] (0xc002da2160) Data frame received for 1
I0811 12:15:25.435507       7 log.go:172] (0xc00175db80) (1) Data frame handling
I0811 12:15:25.435519       7 log.go:172] (0xc00175db80) (1) Data frame sent
I0811 12:15:25.435533       7 log.go:172] (0xc002da2160) (0xc00175db80) Stream removed, broadcasting: 1
I0811 12:15:25.435599       7 log.go:172] (0xc002da2160) (0xc00175db80) Stream removed, broadcasting: 1
I0811 12:15:25.435617       7 log.go:172] (0xc002da2160) (0xc00189cbe0) Stream removed, broadcasting: 3
I0811 12:15:25.435722       7 log.go:172] (0xc002da2160) Go away received
I0811 12:15:25.435760       7 log.go:172] (0xc002da2160) (0xc00189cd20) Stream removed, broadcasting: 5
Aug 11 12:15:25.435: INFO: Exec stderr: ""
Aug 11 12:15:25.435: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2909 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:15:25.435: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:15:25.466014       7 log.go:172] (0xc0027ff4a0) (0xc0010848c0) Create stream
I0811 12:15:25.466041       7 log.go:172] (0xc0027ff4a0) (0xc0010848c0) Stream added, broadcasting: 1
I0811 12:15:25.468017       7 log.go:172] (0xc0027ff4a0) Reply frame received for 1
I0811 12:15:25.468064       7 log.go:172] (0xc0027ff4a0) (0xc0004b6a00) Create stream
I0811 12:15:25.468079       7 log.go:172] (0xc0027ff4a0) (0xc0004b6a00) Stream added, broadcasting: 3
I0811 12:15:25.469128       7 log.go:172] (0xc0027ff4a0) Reply frame received for 3
I0811 12:15:25.469168       7 log.go:172] (0xc0027ff4a0) (0xc0004b6aa0) Create stream
I0811 12:15:25.469186       7 log.go:172] (0xc0027ff4a0) (0xc0004b6aa0) Stream added, broadcasting: 5
I0811 12:15:25.470017       7 log.go:172] (0xc0027ff4a0) Reply frame received for 5
I0811 12:15:25.541290       7 log.go:172] (0xc0027ff4a0) Data frame received for 3
I0811 12:15:25.541320       7 log.go:172] (0xc0004b6a00) (3) Data frame handling
I0811 12:15:25.541336       7 log.go:172] (0xc0004b6a00) (3) Data frame sent
I0811 12:15:25.541350       7 log.go:172] (0xc0027ff4a0) Data frame received for 3
I0811 12:15:25.541368       7 log.go:172] (0xc0004b6a00) (3) Data frame handling
I0811 12:15:25.541398       7 log.go:172] (0xc0027ff4a0) Data frame received for 5
I0811 12:15:25.541410       7 log.go:172] (0xc0004b6aa0) (5) Data frame handling
I0811 12:15:25.543166       7 log.go:172] (0xc0027ff4a0) Data frame received for 1
I0811 12:15:25.543225       7 log.go:172] (0xc0010848c0) (1) Data frame handling
I0811 12:15:25.543249       7 log.go:172] (0xc0010848c0) (1) Data frame sent
I0811 12:15:25.543895       7 log.go:172] (0xc0027ff4a0) (0xc0010848c0) Stream removed, broadcasting: 1
I0811 12:15:25.543977       7 log.go:172] (0xc0027ff4a0) (0xc0010848c0) Stream removed, broadcasting: 1
I0811 12:15:25.543990       7 log.go:172] (0xc0027ff4a0) (0xc0004b6a00) Stream removed, broadcasting: 3
I0811 12:15:25.544000       7 log.go:172] (0xc0027ff4a0) (0xc0004b6aa0) Stream removed, broadcasting: 5
Aug 11 12:15:25.544: INFO: Exec stderr: ""
Aug 11 12:15:25.544: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2909 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
I0811 12:15:25.544039       7 log.go:172] (0xc0027ff4a0) Go away received
Aug 11 12:15:25.544: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:15:25.567561       7 log.go:172] (0xc0027ffad0) (0xc001084d20) Create stream
I0811 12:15:25.567590       7 log.go:172] (0xc0027ffad0) (0xc001084d20) Stream added, broadcasting: 1
I0811 12:15:25.569344       7 log.go:172] (0xc0027ffad0) Reply frame received for 1
I0811 12:15:25.569379       7 log.go:172] (0xc0027ffad0) (0xc001084fa0) Create stream
I0811 12:15:25.569395       7 log.go:172] (0xc0027ffad0) (0xc001084fa0) Stream added, broadcasting: 3
I0811 12:15:25.570116       7 log.go:172] (0xc0027ffad0) Reply frame received for 3
I0811 12:15:25.570144       7 log.go:172] (0xc0027ffad0) (0xc00189ce60) Create stream
I0811 12:15:25.570154       7 log.go:172] (0xc0027ffad0) (0xc00189ce60) Stream added, broadcasting: 5
I0811 12:15:25.570848       7 log.go:172] (0xc0027ffad0) Reply frame received for 5
I0811 12:15:25.625843       7 log.go:172] (0xc0027ffad0) Data frame received for 5
I0811 12:15:25.625864       7 log.go:172] (0xc00189ce60) (5) Data frame handling
I0811 12:15:25.625884       7 log.go:172] (0xc0027ffad0) Data frame received for 3
I0811 12:15:25.625892       7 log.go:172] (0xc001084fa0) (3) Data frame handling
I0811 12:15:25.625898       7 log.go:172] (0xc001084fa0) (3) Data frame sent
I0811 12:15:25.625903       7 log.go:172] (0xc0027ffad0) Data frame received for 3
I0811 12:15:25.625906       7 log.go:172] (0xc001084fa0) (3) Data frame handling
I0811 12:15:25.627309       7 log.go:172] (0xc0027ffad0) Data frame received for 1
I0811 12:15:25.627327       7 log.go:172] (0xc001084d20) (1) Data frame handling
I0811 12:15:25.627334       7 log.go:172] (0xc001084d20) (1) Data frame sent
I0811 12:15:25.627341       7 log.go:172] (0xc0027ffad0) (0xc001084d20) Stream removed, broadcasting: 1
I0811 12:15:25.627417       7 log.go:172] (0xc0027ffad0) (0xc001084fa0) Stream removed, broadcasting: 3
I0811 12:15:25.627424       7 log.go:172] (0xc0027ffad0) (0xc00189ce60) Stream removed, broadcasting: 5
Aug 11 12:15:25.627: INFO: Exec stderr: ""
I0811 12:15:25.627441       7 log.go:172] (0xc0027ffad0) Go away received
Aug 11 12:15:25.627: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2909 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:15:25.627: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:15:25.652502       7 log.go:172] (0xc002b2e160) (0xc0010857c0) Create stream
I0811 12:15:25.652525       7 log.go:172] (0xc002b2e160) (0xc0010857c0) Stream added, broadcasting: 1
I0811 12:15:25.654223       7 log.go:172] (0xc002b2e160) Reply frame received for 1
I0811 12:15:25.654251       7 log.go:172] (0xc002b2e160) (0xc0012fa140) Create stream
I0811 12:15:25.654260       7 log.go:172] (0xc002b2e160) (0xc0012fa140) Stream added, broadcasting: 3
I0811 12:15:25.654894       7 log.go:172] (0xc002b2e160) Reply frame received for 3
I0811 12:15:25.654923       7 log.go:172] (0xc002b2e160) (0xc001085860) Create stream
I0811 12:15:25.654934       7 log.go:172] (0xc002b2e160) (0xc001085860) Stream added, broadcasting: 5
I0811 12:15:25.655574       7 log.go:172] (0xc002b2e160) Reply frame received for 5
I0811 12:15:25.719736       7 log.go:172] (0xc002b2e160) Data frame received for 3
I0811 12:15:25.719775       7 log.go:172] (0xc0012fa140) (3) Data frame handling
I0811 12:15:25.719787       7 log.go:172] (0xc0012fa140) (3) Data frame sent
I0811 12:15:25.719820       7 log.go:172] (0xc002b2e160) Data frame received for 5
I0811 12:15:25.719830       7 log.go:172] (0xc001085860) (5) Data frame handling
I0811 12:15:25.719999       7 log.go:172] (0xc002b2e160) Data frame received for 3
I0811 12:15:25.720080       7 log.go:172] (0xc0012fa140) (3) Data frame handling
I0811 12:15:25.721772       7 log.go:172] (0xc002b2e160) Data frame received for 1
I0811 12:15:25.721796       7 log.go:172] (0xc0010857c0) (1) Data frame handling
I0811 12:15:25.721811       7 log.go:172] (0xc0010857c0) (1) Data frame sent
I0811 12:15:25.721824       7 log.go:172] (0xc002b2e160) (0xc0010857c0) Stream removed, broadcasting: 1
I0811 12:15:25.721923       7 log.go:172] (0xc002b2e160) (0xc0012fa140) Stream removed, broadcasting: 3
I0811 12:15:25.721931       7 log.go:172] (0xc002b2e160) (0xc001085860) Stream removed, broadcasting: 5
Aug 11 12:15:25.721: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 11 12:15:25.721: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2909 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:15:25.721: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:15:25.723362       7 log.go:172] (0xc002b2e160) Go away received
I0811 12:15:25.751107       7 log.go:172] (0xc002da2580) (0xc00166e000) Create stream
I0811 12:15:25.751137       7 log.go:172] (0xc002da2580) (0xc00166e000) Stream added, broadcasting: 1
I0811 12:15:25.753862       7 log.go:172] (0xc002da2580) Reply frame received for 1
I0811 12:15:25.753902       7 log.go:172] (0xc002da2580) (0xc00166e0a0) Create stream
I0811 12:15:25.753916       7 log.go:172] (0xc002da2580) (0xc00166e0a0) Stream added, broadcasting: 3
I0811 12:15:25.754789       7 log.go:172] (0xc002da2580) Reply frame received for 3
I0811 12:15:25.754831       7 log.go:172] (0xc002da2580) (0xc0004b6dc0) Create stream
I0811 12:15:25.754845       7 log.go:172] (0xc002da2580) (0xc0004b6dc0) Stream added, broadcasting: 5
I0811 12:15:25.755831       7 log.go:172] (0xc002da2580) Reply frame received for 5
I0811 12:15:25.804689       7 log.go:172] (0xc002da2580) Data frame received for 5
I0811 12:15:25.804705       7 log.go:172] (0xc0004b6dc0) (5) Data frame handling
I0811 12:15:25.804894       7 log.go:172] (0xc002da2580) Data frame received for 3
I0811 12:15:25.804921       7 log.go:172] (0xc00166e0a0) (3) Data frame handling
I0811 12:15:25.804948       7 log.go:172] (0xc00166e0a0) (3) Data frame sent
I0811 12:15:25.805110       7 log.go:172] (0xc002da2580) Data frame received for 3
I0811 12:15:25.805120       7 log.go:172] (0xc00166e0a0) (3) Data frame handling
I0811 12:15:25.806463       7 log.go:172] (0xc002da2580) Data frame received for 1
I0811 12:15:25.806481       7 log.go:172] (0xc00166e000) (1) Data frame handling
I0811 12:15:25.806496       7 log.go:172] (0xc00166e000) (1) Data frame sent
I0811 12:15:25.806697       7 log.go:172] (0xc002da2580) (0xc00166e000) Stream removed, broadcasting: 1
I0811 12:15:25.806718       7 log.go:172] (0xc002da2580) Go away received
I0811 12:15:25.806823       7 log.go:172] (0xc002da2580) (0xc00166e0a0) Stream removed, broadcasting: 3
I0811 12:15:25.806835       7 log.go:172] (0xc002da2580) (0xc0004b6dc0) Stream removed, broadcasting: 5
Aug 11 12:15:25.806: INFO: Exec stderr: ""
Aug 11 12:15:25.806: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2909 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:15:25.806: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:15:25.837929       7 log.go:172] (0xc002da2bb0) (0xc00166e500) Create stream
I0811 12:15:25.837975       7 log.go:172] (0xc002da2bb0) (0xc00166e500) Stream added, broadcasting: 1
I0811 12:15:25.840448       7 log.go:172] (0xc002da2bb0) Reply frame received for 1
I0811 12:15:25.840486       7 log.go:172] (0xc002da2bb0) (0xc0004b7040) Create stream
I0811 12:15:25.840499       7 log.go:172] (0xc002da2bb0) (0xc0004b7040) Stream added, broadcasting: 3
I0811 12:15:25.841548       7 log.go:172] (0xc002da2bb0) Reply frame received for 3
I0811 12:15:25.841583       7 log.go:172] (0xc002da2bb0) (0xc001085b80) Create stream
I0811 12:15:25.841596       7 log.go:172] (0xc002da2bb0) (0xc001085b80) Stream added, broadcasting: 5
I0811 12:15:25.842605       7 log.go:172] (0xc002da2bb0) Reply frame received for 5
I0811 12:15:25.921500       7 log.go:172] (0xc002da2bb0) Data frame received for 5
I0811 12:15:25.921526       7 log.go:172] (0xc001085b80) (5) Data frame handling
I0811 12:15:25.921541       7 log.go:172] (0xc002da2bb0) Data frame received for 3
I0811 12:15:25.921556       7 log.go:172] (0xc0004b7040) (3) Data frame handling
I0811 12:15:25.921567       7 log.go:172] (0xc0004b7040) (3) Data frame sent
I0811 12:15:25.921574       7 log.go:172] (0xc002da2bb0) Data frame received for 3
I0811 12:15:25.921582       7 log.go:172] (0xc0004b7040) (3) Data frame handling
I0811 12:15:25.922924       7 log.go:172] (0xc002da2bb0) Data frame received for 1
I0811 12:15:25.922947       7 log.go:172] (0xc00166e500) (1) Data frame handling
I0811 12:15:25.922960       7 log.go:172] (0xc00166e500) (1) Data frame sent
I0811 12:15:25.922977       7 log.go:172] (0xc002da2bb0) (0xc00166e500) Stream removed, broadcasting: 1
I0811 12:15:25.923034       7 log.go:172] (0xc002da2bb0) Go away received
I0811 12:15:25.923093       7 log.go:172] (0xc002da2bb0) (0xc0004b7040) Stream removed, broadcasting: 3
I0811 12:15:25.923107       7 log.go:172] (0xc002da2bb0) (0xc001085b80) Stream removed, broadcasting: 5
Aug 11 12:15:25.923: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 11 12:15:25.923: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2909 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:15:25.923: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:15:25.948950       7 log.go:172] (0xc002b2e790) (0xc0016fa280) Create stream
I0811 12:15:25.948983       7 log.go:172] (0xc002b2e790) (0xc0016fa280) Stream added, broadcasting: 1
I0811 12:15:25.950640       7 log.go:172] (0xc002b2e790) Reply frame received for 1
I0811 12:15:25.950668       7 log.go:172] (0xc002b2e790) (0xc0012fa280) Create stream
I0811 12:15:25.950678       7 log.go:172] (0xc002b2e790) (0xc0012fa280) Stream added, broadcasting: 3
I0811 12:15:25.951433       7 log.go:172] (0xc002b2e790) Reply frame received for 3
I0811 12:15:25.951459       7 log.go:172] (0xc002b2e790) (0xc0004b72c0) Create stream
I0811 12:15:25.951468       7 log.go:172] (0xc002b2e790) (0xc0004b72c0) Stream added, broadcasting: 5
I0811 12:15:25.951998       7 log.go:172] (0xc002b2e790) Reply frame received for 5
I0811 12:15:26.010506       7 log.go:172] (0xc002b2e790) Data frame received for 5
I0811 12:15:26.010536       7 log.go:172] (0xc0004b72c0) (5) Data frame handling
I0811 12:15:26.010564       7 log.go:172] (0xc002b2e790) Data frame received for 3
I0811 12:15:26.010582       7 log.go:172] (0xc0012fa280) (3) Data frame handling
I0811 12:15:26.010593       7 log.go:172] (0xc0012fa280) (3) Data frame sent
I0811 12:15:26.010601       7 log.go:172] (0xc002b2e790) Data frame received for 3
I0811 12:15:26.010614       7 log.go:172] (0xc0012fa280) (3) Data frame handling
I0811 12:15:26.011730       7 log.go:172] (0xc002b2e790) Data frame received for 1
I0811 12:15:26.011748       7 log.go:172] (0xc0016fa280) (1) Data frame handling
I0811 12:15:26.011758       7 log.go:172] (0xc0016fa280) (1) Data frame sent
I0811 12:15:26.011767       7 log.go:172] (0xc002b2e790) (0xc0016fa280) Stream removed, broadcasting: 1
I0811 12:15:26.011825       7 log.go:172] (0xc002b2e790) (0xc0012fa280) Stream removed, broadcasting: 3
I0811 12:15:26.011837       7 log.go:172] (0xc002b2e790) (0xc0004b72c0) Stream removed, broadcasting: 5
Aug 11 12:15:26.011: INFO: Exec stderr: ""
Aug 11 12:15:26.011: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2909 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:15:26.011: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:15:26.013646       7 log.go:172] (0xc002b2e790) Go away received
I0811 12:15:26.032957       7 log.go:172] (0xc002d14420) (0xc0012fa5a0) Create stream
I0811 12:15:26.032984       7 log.go:172] (0xc002d14420) (0xc0012fa5a0) Stream added, broadcasting: 1
I0811 12:15:26.034957       7 log.go:172] (0xc002d14420) Reply frame received for 1
I0811 12:15:26.035002       7 log.go:172] (0xc002d14420) (0xc0012fa6e0) Create stream
I0811 12:15:26.035021       7 log.go:172] (0xc002d14420) (0xc0012fa6e0) Stream added, broadcasting: 3
I0811 12:15:26.035833       7 log.go:172] (0xc002d14420) Reply frame received for 3
I0811 12:15:26.035859       7 log.go:172] (0xc002d14420) (0xc0016fa3c0) Create stream
I0811 12:15:26.035868       7 log.go:172] (0xc002d14420) (0xc0016fa3c0) Stream added, broadcasting: 5
I0811 12:15:26.036666       7 log.go:172] (0xc002d14420) Reply frame received for 5
I0811 12:15:26.092623       7 log.go:172] (0xc002d14420) Data frame received for 5
I0811 12:15:26.092666       7 log.go:172] (0xc0016fa3c0) (5) Data frame handling
I0811 12:15:26.092692       7 log.go:172] (0xc002d14420) Data frame received for 3
I0811 12:15:26.092706       7 log.go:172] (0xc0012fa6e0) (3) Data frame handling
I0811 12:15:26.092718       7 log.go:172] (0xc0012fa6e0) (3) Data frame sent
I0811 12:15:26.092932       7 log.go:172] (0xc002d14420) Data frame received for 3
I0811 12:15:26.092945       7 log.go:172] (0xc0012fa6e0) (3) Data frame handling
I0811 12:15:26.094372       7 log.go:172] (0xc002d14420) Data frame received for 1
I0811 12:15:26.094396       7 log.go:172] (0xc0012fa5a0) (1) Data frame handling
I0811 12:15:26.094408       7 log.go:172] (0xc0012fa5a0) (1) Data frame sent
I0811 12:15:26.094425       7 log.go:172] (0xc002d14420) (0xc0012fa5a0) Stream removed, broadcasting: 1
I0811 12:15:26.094548       7 log.go:172] (0xc002d14420) (0xc0012fa6e0) Stream removed, broadcasting: 3
I0811 12:15:26.094562       7 log.go:172] (0xc002d14420) (0xc0016fa3c0) Stream removed, broadcasting: 5
Aug 11 12:15:26.094: INFO: Exec stderr: ""
Aug 11 12:15:26.094: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2909 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:15:26.094: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:15:26.094666       7 log.go:172] (0xc002d14420) Go away received
I0811 12:15:26.127963       7 log.go:172] (0xc002d62790) (0xc0004b7cc0) Create stream
I0811 12:15:26.128003       7 log.go:172] (0xc002d62790) (0xc0004b7cc0) Stream added, broadcasting: 1
I0811 12:15:26.132486       7 log.go:172] (0xc002d62790) Reply frame received for 1
I0811 12:15:26.132519       7 log.go:172] (0xc002d62790) (0xc000c04320) Create stream
I0811 12:15:26.132526       7 log.go:172] (0xc002d62790) (0xc000c04320) Stream added, broadcasting: 3
I0811 12:15:26.133673       7 log.go:172] (0xc002d62790) Reply frame received for 3
I0811 12:15:26.133719       7 log.go:172] (0xc002d62790) (0xc0012fa960) Create stream
I0811 12:15:26.133740       7 log.go:172] (0xc002d62790) (0xc0012fa960) Stream added, broadcasting: 5
I0811 12:15:26.134887       7 log.go:172] (0xc002d62790) Reply frame received for 5
I0811 12:15:26.190666       7 log.go:172] (0xc002d62790) Data frame received for 3
I0811 12:15:26.190692       7 log.go:172] (0xc000c04320) (3) Data frame handling
I0811 12:15:26.190703       7 log.go:172] (0xc000c04320) (3) Data frame sent
I0811 12:15:26.190710       7 log.go:172] (0xc002d62790) Data frame received for 3
I0811 12:15:26.190718       7 log.go:172] (0xc000c04320) (3) Data frame handling
I0811 12:15:26.190727       7 log.go:172] (0xc002d62790) Data frame received for 5
I0811 12:15:26.190732       7 log.go:172] (0xc0012fa960) (5) Data frame handling
I0811 12:15:26.193075       7 log.go:172] (0xc002d62790) Data frame received for 1
I0811 12:15:26.193089       7 log.go:172] (0xc0004b7cc0) (1) Data frame handling
I0811 12:15:26.193098       7 log.go:172] (0xc0004b7cc0) (1) Data frame sent
I0811 12:15:26.193109       7 log.go:172] (0xc002d62790) (0xc0004b7cc0) Stream removed, broadcasting: 1
I0811 12:15:26.193212       7 log.go:172] (0xc002d62790) (0xc000c04320) Stream removed, broadcasting: 3
I0811 12:15:26.193325       7 log.go:172] (0xc002d62790) (0xc0012fa960) Stream removed, broadcasting: 5
Aug 11 12:15:26.193: INFO: Exec stderr: ""
Aug 11 12:15:26.193: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2909 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:15:26.193: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:15:26.218615       7 log.go:172] (0xc002d14a50) (0xc0012fb0e0) Create stream
I0811 12:15:26.218640       7 log.go:172] (0xc002d14a50) (0xc0012fb0e0) Stream added, broadcasting: 1
I0811 12:15:26.221339       7 log.go:172] (0xc002d14a50) Reply frame received for 1
I0811 12:15:26.221370       7 log.go:172] (0xc002d14a50) (0xc00189cf00) Create stream
I0811 12:15:26.221381       7 log.go:172] (0xc002d14a50) (0xc00189cf00) Stream added, broadcasting: 3
I0811 12:15:26.222213       7 log.go:172] (0xc002d14a50) Reply frame received for 3
I0811 12:15:26.222247       7 log.go:172] (0xc002d14a50) (0xc00166e6e0) Create stream
I0811 12:15:26.222255       7 log.go:172] (0xc002d14a50) (0xc00166e6e0) Stream added, broadcasting: 5
I0811 12:15:26.223129       7 log.go:172] (0xc002d14a50) Reply frame received for 5
I0811 12:15:26.277457       7 log.go:172] (0xc002d14a50) Data frame received for 3
I0811 12:15:26.277508       7 log.go:172] (0xc00189cf00) (3) Data frame handling
I0811 12:15:26.277518       7 log.go:172] (0xc00189cf00) (3) Data frame sent
I0811 12:15:26.277523       7 log.go:172] (0xc002d14a50) Data frame received for 3
I0811 12:15:26.277527       7 log.go:172] (0xc00189cf00) (3) Data frame handling
I0811 12:15:26.277543       7 log.go:172] (0xc002d14a50) Data frame received for 5
I0811 12:15:26.277549       7 log.go:172] (0xc00166e6e0) (5) Data frame handling
I0811 12:15:26.278939       7 log.go:172] (0xc002d14a50) Data frame received for 1
I0811 12:15:26.278962       7 log.go:172] (0xc0012fb0e0) (1) Data frame handling
I0811 12:15:26.278985       7 log.go:172] (0xc0012fb0e0) (1) Data frame sent
I0811 12:15:26.279000       7 log.go:172] (0xc002d14a50) (0xc0012fb0e0) Stream removed, broadcasting: 1
I0811 12:15:26.279072       7 log.go:172] (0xc002d14a50) (0xc00189cf00) Stream removed, broadcasting: 3
I0811 12:15:26.279080       7 log.go:172] (0xc002d14a50) (0xc00166e6e0) Stream removed, broadcasting: 5
Aug 11 12:15:26.279: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:15:26.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0811 12:15:26.279397       7 log.go:172] (0xc002d14a50) Go away received
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2909" for this suite.

• [SLOW TEST:21.945 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":931,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:15:26.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 11 12:15:26.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6833'
Aug 11 12:15:26.998: INFO: stderr: ""
Aug 11 12:15:26.998: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Aug 11 12:15:27.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6833'
Aug 11 12:15:43.642: INFO: stderr: ""
Aug 11 12:15:43.642: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:15:43.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6833" for this suite.

• [SLOW TEST:17.404 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":275,"completed":73,"skipped":981,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:15:43.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-c71d8b94-4d16-4c15-b1e5-cd270e0d967f
STEP: Creating a pod to test consume secrets
Aug 11 12:15:45.200: INFO: Waiting up to 5m0s for pod "pod-secrets-77770c6c-8310-4c45-b194-ab8359b87971" in namespace "secrets-4931" to be "Succeeded or Failed"
Aug 11 12:15:45.502: INFO: Pod "pod-secrets-77770c6c-8310-4c45-b194-ab8359b87971": Phase="Pending", Reason="", readiness=false. Elapsed: 302.02986ms
Aug 11 12:15:47.505: INFO: Pod "pod-secrets-77770c6c-8310-4c45-b194-ab8359b87971": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305090992s
Aug 11 12:15:49.567: INFO: Pod "pod-secrets-77770c6c-8310-4c45-b194-ab8359b87971": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367005969s
Aug 11 12:15:51.569: INFO: Pod "pod-secrets-77770c6c-8310-4c45-b194-ab8359b87971": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.369559671s
STEP: Saw pod success
Aug 11 12:15:51.569: INFO: Pod "pod-secrets-77770c6c-8310-4c45-b194-ab8359b87971" satisfied condition "Succeeded or Failed"
Aug 11 12:15:51.571: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-77770c6c-8310-4c45-b194-ab8359b87971 container secret-volume-test: 
STEP: delete the pod
Aug 11 12:15:51.685: INFO: Waiting for pod pod-secrets-77770c6c-8310-4c45-b194-ab8359b87971 to disappear
Aug 11 12:15:51.703: INFO: Pod pod-secrets-77770c6c-8310-4c45-b194-ab8359b87971 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:15:51.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4931" for this suite.

• [SLOW TEST:8.019 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1028,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:15:51.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:15:52.598: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:15:55.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:15:57.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:15:59.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:16:01.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732744952, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:16:04.311: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:16:04.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1972" for this suite.
STEP: Destroying namespace "webhook-1972-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.213 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":75,"skipped":1041,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:16:05.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:16:08.114: INFO: The status of Pod test-webserver-703d26c2-18ad-4d49-b115-bb9495748fce is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:16:10.647: INFO: The status of Pod test-webserver-703d26c2-18ad-4d49-b115-bb9495748fce is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:16:12.234: INFO: The status of Pod test-webserver-703d26c2-18ad-4d49-b115-bb9495748fce is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:16:14.352: INFO: The status of Pod test-webserver-703d26c2-18ad-4d49-b115-bb9495748fce is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:16:16.124: INFO: The status of Pod test-webserver-703d26c2-18ad-4d49-b115-bb9495748fce is Running (Ready = false)
Aug 11 12:16:18.117: INFO: The status of Pod test-webserver-703d26c2-18ad-4d49-b115-bb9495748fce is Running (Ready = false)
Aug 11 12:16:20.136: INFO: The status of Pod test-webserver-703d26c2-18ad-4d49-b115-bb9495748fce is Running (Ready = false)
Aug 11 12:16:22.544: INFO: The status of Pod test-webserver-703d26c2-18ad-4d49-b115-bb9495748fce is Running (Ready = false)
Aug 11 12:16:24.119: INFO: The status of Pod test-webserver-703d26c2-18ad-4d49-b115-bb9495748fce is Running (Ready = false)
Aug 11 12:16:26.118: INFO: The status of Pod test-webserver-703d26c2-18ad-4d49-b115-bb9495748fce is Running (Ready = false)
Aug 11 12:16:28.275: INFO: The status of Pod test-webserver-703d26c2-18ad-4d49-b115-bb9495748fce is Running (Ready = false)
Aug 11 12:16:30.202: INFO: The status of Pod test-webserver-703d26c2-18ad-4d49-b115-bb9495748fce is Running (Ready = true)
Aug 11 12:16:30.204: INFO: Container started at 2020-08-11 12:16:13 +0000 UTC, pod became ready at 2020-08-11 12:16:29 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:16:30.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9997" for this suite.

• [SLOW TEST:24.353 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1047,"failed":0}
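The readiness-probe timing seen in the log above (the pod reports `Running (Ready = false)` until the initial delay elapses, then flips to `Ready = true`) comes from a manifest along these lines. This is a hedged sketch; the image, port, and delay values are illustrative assumptions, not taken from the test source.

```yaml
# Sketch (image/port/delay assumed): a pod whose readiness probe has an
# initial delay, so it stays Running but not Ready at first, as above.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: nginx:1.19            # assumed image
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20    # pod reports Ready = false until this elapses
      periodSeconds: 5
```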
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:16:30.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 11 12:16:33.693: INFO: Waiting up to 5m0s for pod "downward-api-7da3dcb1-6ae6-4f74-9718-abb042e9f5f2" in namespace "downward-api-2031" to be "Succeeded or Failed"
Aug 11 12:16:34.828: INFO: Pod "downward-api-7da3dcb1-6ae6-4f74-9718-abb042e9f5f2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.135006568s
Aug 11 12:16:37.444: INFO: Pod "downward-api-7da3dcb1-6ae6-4f74-9718-abb042e9f5f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.750493074s
Aug 11 12:16:39.701: INFO: Pod "downward-api-7da3dcb1-6ae6-4f74-9718-abb042e9f5f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007487505s
Aug 11 12:16:41.704: INFO: Pod "downward-api-7da3dcb1-6ae6-4f74-9718-abb042e9f5f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011161863s
Aug 11 12:16:43.707: INFO: Pod "downward-api-7da3dcb1-6ae6-4f74-9718-abb042e9f5f2": Phase="Running", Reason="", readiness=true. Elapsed: 10.014252819s
Aug 11 12:16:45.743: INFO: Pod "downward-api-7da3dcb1-6ae6-4f74-9718-abb042e9f5f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.05014036s
STEP: Saw pod success
Aug 11 12:16:45.743: INFO: Pod "downward-api-7da3dcb1-6ae6-4f74-9718-abb042e9f5f2" satisfied condition "Succeeded or Failed"
Aug 11 12:16:45.746: INFO: Trying to get logs from node kali-worker pod downward-api-7da3dcb1-6ae6-4f74-9718-abb042e9f5f2 container dapi-container: 
STEP: delete the pod
Aug 11 12:16:46.150: INFO: Waiting for pod downward-api-7da3dcb1-6ae6-4f74-9718-abb042e9f5f2 to disappear
Aug 11 12:16:46.154: INFO: Pod downward-api-7da3dcb1-6ae6-4f74-9718-abb042e9f5f2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:16:46.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2031" for this suite.

• [SLOW TEST:15.882 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1047,"failed":0}
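The downward API test above injects a container's own resource limits and requests as environment variables via `resourceFieldRef`. A minimal sketch, with assumed image, names, and quantities:

```yaml
# Sketch (image/values assumed): expose limits.cpu/memory and
# requests.cpu/memory to the container as env vars.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29          # assumed image
    command: ["sh", "-c", "env"] # print the injected variables
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```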
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:16:46.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 11 12:16:46.769: INFO: Waiting up to 5m0s for pod "pod-25f563e6-b658-4be7-a214-10e83487c3dc" in namespace "emptydir-2082" to be "Succeeded or Failed"
Aug 11 12:16:47.192: INFO: Pod "pod-25f563e6-b658-4be7-a214-10e83487c3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 422.246617ms
Aug 11 12:16:49.195: INFO: Pod "pod-25f563e6-b658-4be7-a214-10e83487c3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.425663368s
Aug 11 12:16:51.197: INFO: Pod "pod-25f563e6-b658-4be7-a214-10e83487c3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.428065047s
Aug 11 12:16:53.200: INFO: Pod "pod-25f563e6-b658-4be7-a214-10e83487c3dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.430779706s
STEP: Saw pod success
Aug 11 12:16:53.200: INFO: Pod "pod-25f563e6-b658-4be7-a214-10e83487c3dc" satisfied condition "Succeeded or Failed"
Aug 11 12:16:53.202: INFO: Trying to get logs from node kali-worker pod pod-25f563e6-b658-4be7-a214-10e83487c3dc container test-container: 
STEP: delete the pod
Aug 11 12:16:53.236: INFO: Waiting for pod pod-25f563e6-b658-4be7-a214-10e83487c3dc to disappear
Aug 11 12:16:53.249: INFO: Pod pod-25f563e6-b658-4be7-a214-10e83487c3dc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:16:53.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2082" for this suite.

• [SLOW TEST:7.106 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1077,"failed":0}
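The emptyDir test above ("0644 on node default medium") writes a file with mode 0644 into an emptyDir volume and verifies it from the container logs. A hedged sketch of the shape of such a pod; the image and commands here are assumptions, not the e2e test's actual mount-tester invocation:

```yaml
# Sketch (image/command assumed): write a 0644 file into an emptyDir on
# the node's default medium and print its permissions.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir: {}               # default medium (node disk); no `medium: Memory`
  containers:
  - name: test-container
    image: busybox:1.29        # assumed image
    command: ["sh", "-c",
      "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
```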
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:16:53.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:16:53.369: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 11 12:16:55.417: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:16:56.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4488" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":79,"skipped":1091,"failed":0}
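The quota interaction exercised above can be sketched as two objects: a ResourceQuota capping pods in the namespace, and a ReplicationController asking for more replicas than the quota allows, which surfaces a `ReplicaFailure` condition until the RC is scaled down. The replica counts, image, and labels below are assumptions for illustration:

```yaml
# Sketch (counts/image assumed): quota allows 2 pods; the RC wants 3,
# so it carries a ReplicaFailure condition until scaled to fit.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                  # exceeds the pod quota above
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: app
        image: busybox:1.29    # assumed image
        command: ["sleep", "3600"]
```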
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:16:56.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 11 12:16:57.577: INFO: Waiting up to 5m0s for pod "pod-7875cd6f-345d-4ab5-a462-9fd2f536f81a" in namespace "emptydir-9332" to be "Succeeded or Failed"
Aug 11 12:16:57.650: INFO: Pod "pod-7875cd6f-345d-4ab5-a462-9fd2f536f81a": Phase="Pending", Reason="", readiness=false. Elapsed: 72.770439ms
Aug 11 12:16:59.653: INFO: Pod "pod-7875cd6f-345d-4ab5-a462-9fd2f536f81a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07588057s
Aug 11 12:17:01.658: INFO: Pod "pod-7875cd6f-345d-4ab5-a462-9fd2f536f81a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08107042s
Aug 11 12:17:03.664: INFO: Pod "pod-7875cd6f-345d-4ab5-a462-9fd2f536f81a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086280445s
STEP: Saw pod success
Aug 11 12:17:03.664: INFO: Pod "pod-7875cd6f-345d-4ab5-a462-9fd2f536f81a" satisfied condition "Succeeded or Failed"
Aug 11 12:17:03.666: INFO: Trying to get logs from node kali-worker2 pod pod-7875cd6f-345d-4ab5-a462-9fd2f536f81a container test-container: 
STEP: delete the pod
Aug 11 12:17:03.889: INFO: Waiting for pod pod-7875cd6f-345d-4ab5-a462-9fd2f536f81a to disappear
Aug 11 12:17:03.897: INFO: Pod pod-7875cd6f-345d-4ab5-a462-9fd2f536f81a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:17:03.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9332" for this suite.

• [SLOW TEST:7.420 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1100,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:17:03.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:17:04.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 11 12:17:07.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3905 create -f -'
Aug 11 12:17:11.009: INFO: stderr: ""
Aug 11 12:17:11.009: INFO: stdout: "e2e-test-crd-publish-openapi-746-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 11 12:17:11.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3905 delete e2e-test-crd-publish-openapi-746-crds test-foo'
Aug 11 12:17:11.118: INFO: stderr: ""
Aug 11 12:17:11.118: INFO: stdout: "e2e-test-crd-publish-openapi-746-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 11 12:17:11.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3905 apply -f -'
Aug 11 12:17:11.413: INFO: stderr: ""
Aug 11 12:17:11.413: INFO: stdout: "e2e-test-crd-publish-openapi-746-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 11 12:17:11.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3905 delete e2e-test-crd-publish-openapi-746-crds test-foo'
Aug 11 12:17:11.518: INFO: stderr: ""
Aug 11 12:17:11.518: INFO: stdout: "e2e-test-crd-publish-openapi-746-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 11 12:17:11.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3905 create -f -'
Aug 11 12:17:11.761: INFO: rc: 1
Aug 11 12:17:11.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3905 apply -f -'
Aug 11 12:17:12.172: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 11 12:17:12.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3905 create -f -'
Aug 11 12:17:12.419: INFO: rc: 1
Aug 11 12:17:12.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3905 apply -f -'
Aug 11 12:17:12.644: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 11 12:17:12.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-746-crds'
Aug 11 12:17:13.242: INFO: stderr: ""
Aug 11 12:17:13.242: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-746-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 11 12:17:13.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-746-crds.metadata'
Aug 11 12:17:13.520: INFO: stderr: ""
Aug 11 12:17:13.520: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-746-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 11 12:17:13.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-746-crds.spec'
Aug 11 12:17:14.921: INFO: stderr: ""
Aug 11 12:17:14.921: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-746-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 11 12:17:14.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-746-crds.spec.bars'
Aug 11 12:17:15.775: INFO: stderr: ""
Aug 11 12:17:15.775: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-746-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 11 12:17:15.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-746-crds.spec.bars2'
Aug 11 12:17:16.099: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:17:19.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3905" for this suite.

• [SLOW TEST:15.158 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":81,"skipped":1117,"failed":0}
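The client-side validation behavior above (create/apply succeed with known and required properties, are rejected for unknown or missing ones, and `kubectl explain` walks `spec.bars`) is driven by a structural `openAPIV3Schema` published by the CRD. A hedged sketch with assumed group and kind names, mirroring the `spec.bars` fields (`name` required, `age`, `bazs`) shown in the `kubectl explain` output:

```yaml
# Sketch (group/kind names assumed): a CRD whose schema produces the
# validation and `kubectl explain` behavior seen in the log.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:
                type: array
                items:
                  type: object
                  required: ["name"]   # create/apply without name is rejected
                  properties:
                    name:
                      type: string
                    age:
                      type: string
                    bazs:
                      type: array
                      items:
                        type: string
```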
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:17:19.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 11 12:17:23.756: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1385 pod-service-account-ef02e351-28bc-4370-8885-8773448c58c1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 11 12:17:23.968: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1385 pod-service-account-ef02e351-28bc-4370-8885-8773448c58c1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 11 12:17:24.193: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1385 pod-service-account-ef02e351-28bc-4370-8885-8773448c58c1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:17:24.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1385" for this suite.

• [SLOW TEST:5.346 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":82,"skipped":1117,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:17:24.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:17:55.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8789" for this suite.
STEP: Destroying namespace "nsdeletetest-6076" for this suite.
Aug 11 12:17:55.751: INFO: Namespace nsdeletetest-6076 was already deleted
STEP: Destroying namespace "nsdeletetest-8048" for this suite.

• [SLOW TEST:31.346 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":83,"skipped":1136,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:17:55.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:18:03.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-805" for this suite.

• [SLOW TEST:7.779 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1147,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:18:03.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-1267f3ba-7dc8-4a22-b459-2c117f1608af
STEP: Creating a pod to test consume configMaps
Aug 11 12:18:03.972: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c462f0b-92dd-46ff-bd4a-a93b21eaf103" in namespace "configmap-8484" to be "Succeeded or Failed"
Aug 11 12:18:03.976: INFO: Pod "pod-configmaps-8c462f0b-92dd-46ff-bd4a-a93b21eaf103": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107894ms
Aug 11 12:18:05.988: INFO: Pod "pod-configmaps-8c462f0b-92dd-46ff-bd4a-a93b21eaf103": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016269368s
Aug 11 12:18:08.241: INFO: Pod "pod-configmaps-8c462f0b-92dd-46ff-bd4a-a93b21eaf103": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268501624s
Aug 11 12:18:10.599: INFO: Pod "pod-configmaps-8c462f0b-92dd-46ff-bd4a-a93b21eaf103": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626680928s
Aug 11 12:18:12.603: INFO: Pod "pod-configmaps-8c462f0b-92dd-46ff-bd4a-a93b21eaf103": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.63072827s
STEP: Saw pod success
Aug 11 12:18:12.603: INFO: Pod "pod-configmaps-8c462f0b-92dd-46ff-bd4a-a93b21eaf103" satisfied condition "Succeeded or Failed"
Aug 11 12:18:12.608: INFO: Trying to get logs from node kali-worker pod pod-configmaps-8c462f0b-92dd-46ff-bd4a-a93b21eaf103 container configmap-volume-test: 
STEP: delete the pod
Aug 11 12:18:12.690: INFO: Waiting for pod pod-configmaps-8c462f0b-92dd-46ff-bd4a-a93b21eaf103 to disappear
Aug 11 12:18:12.750: INFO: Pod pod-configmaps-8c462f0b-92dd-46ff-bd4a-a93b21eaf103 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:18:12.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8484" for this suite.

• [SLOW TEST:9.322 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1150,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:18:12.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:18:13.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4098
I0811 12:18:13.188545       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4098, replica count: 1
I0811 12:18:14.238914       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 12:18:15.239197       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 12:18:16.239470       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 12:18:17.239693       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 11 12:18:17.378: INFO: Created: latency-svc-wwrqt
Aug 11 12:18:17.415: INFO: Got endpoints: latency-svc-wwrqt [76.092834ms]
Aug 11 12:18:17.469: INFO: Created: latency-svc-76p5b
Aug 11 12:18:17.534: INFO: Got endpoints: latency-svc-76p5b [118.429984ms]
Aug 11 12:18:17.558: INFO: Created: latency-svc-m5rs6
Aug 11 12:18:17.583: INFO: Got endpoints: latency-svc-m5rs6 [166.922175ms]
Aug 11 12:18:17.618: INFO: Created: latency-svc-77nvv
Aug 11 12:18:17.665: INFO: Got endpoints: latency-svc-77nvv [249.597803ms]
Aug 11 12:18:17.678: INFO: Created: latency-svc-j94cl
Aug 11 12:18:17.694: INFO: Got endpoints: latency-svc-j94cl [278.073082ms]
Aug 11 12:18:17.721: INFO: Created: latency-svc-rr6xv
Aug 11 12:18:17.759: INFO: Got endpoints: latency-svc-rr6xv [343.116949ms]
Aug 11 12:18:17.834: INFO: Created: latency-svc-nftrl
Aug 11 12:18:17.876: INFO: Got endpoints: latency-svc-nftrl [460.189102ms]
Aug 11 12:18:17.936: INFO: Created: latency-svc-zrcq4
Aug 11 12:18:17.985: INFO: Got endpoints: latency-svc-zrcq4 [569.050174ms]
Aug 11 12:18:18.069: INFO: Created: latency-svc-znrrc
Aug 11 12:18:18.072: INFO: Got endpoints: latency-svc-znrrc [656.270956ms]
Aug 11 12:18:18.135: INFO: Created: latency-svc-2gxn6
Aug 11 12:18:18.154: INFO: Got endpoints: latency-svc-2gxn6 [738.876443ms]
Aug 11 12:18:18.240: INFO: Created: latency-svc-shdlt
Aug 11 12:18:18.642: INFO: Got endpoints: latency-svc-shdlt [1.226409806s]
Aug 11 12:18:18.728: INFO: Created: latency-svc-jzkpb
Aug 11 12:18:18.809: INFO: Got endpoints: latency-svc-jzkpb [1.392848363s]
Aug 11 12:18:18.872: INFO: Created: latency-svc-tgrpd
Aug 11 12:18:18.888: INFO: Got endpoints: latency-svc-tgrpd [1.472200729s]
Aug 11 12:18:18.965: INFO: Created: latency-svc-kmjv5
Aug 11 12:18:18.988: INFO: Got endpoints: latency-svc-kmjv5 [1.571562959s]
Aug 11 12:18:19.020: INFO: Created: latency-svc-szk58
Aug 11 12:18:19.030: INFO: Got endpoints: latency-svc-szk58 [1.614226036s]
Aug 11 12:18:19.054: INFO: Created: latency-svc-ptd7l
Aug 11 12:18:19.096: INFO: Got endpoints: latency-svc-ptd7l [1.680136594s]
Aug 11 12:18:19.115: INFO: Created: latency-svc-vlmzx
Aug 11 12:18:19.127: INFO: Got endpoints: latency-svc-vlmzx [1.59247908s]
Aug 11 12:18:19.148: INFO: Created: latency-svc-qrc26
Aug 11 12:18:19.181: INFO: Got endpoints: latency-svc-qrc26 [1.598669949s]
Aug 11 12:18:19.241: INFO: Created: latency-svc-k5dxf
Aug 11 12:18:19.245: INFO: Got endpoints: latency-svc-k5dxf [1.579021004s]
Aug 11 12:18:19.270: INFO: Created: latency-svc-bp5bs
Aug 11 12:18:19.304: INFO: Got endpoints: latency-svc-bp5bs [1.609910143s]
Aug 11 12:18:19.390: INFO: Created: latency-svc-hq7mc
Aug 11 12:18:19.396: INFO: Got endpoints: latency-svc-hq7mc [1.636412265s]
Aug 11 12:18:19.462: INFO: Created: latency-svc-cq82f
Aug 11 12:18:19.479: INFO: Got endpoints: latency-svc-cq82f [1.603404704s]
Aug 11 12:18:19.568: INFO: Created: latency-svc-78tt2
Aug 11 12:18:19.598: INFO: Got endpoints: latency-svc-78tt2 [1.613226264s]
Aug 11 12:18:19.701: INFO: Created: latency-svc-j8b45
Aug 11 12:18:19.720: INFO: Got endpoints: latency-svc-j8b45 [1.64738842s]
Aug 11 12:18:19.990: INFO: Created: latency-svc-67knt
Aug 11 12:18:20.055: INFO: Got endpoints: latency-svc-67knt [1.900811968s]
Aug 11 12:18:20.284: INFO: Created: latency-svc-lrgmf
Aug 11 12:18:20.295: INFO: Got endpoints: latency-svc-lrgmf [1.652209179s]
Aug 11 12:18:20.338: INFO: Created: latency-svc-s8sjr
Aug 11 12:18:20.356: INFO: Got endpoints: latency-svc-s8sjr [1.546148521s]
Aug 11 12:18:20.493: INFO: Created: latency-svc-w8btf
Aug 11 12:18:20.522: INFO: Got endpoints: latency-svc-w8btf [1.634166777s]
Aug 11 12:18:20.560: INFO: Created: latency-svc-vtczh
Aug 11 12:18:20.623: INFO: Got endpoints: latency-svc-vtczh [1.635031554s]
Aug 11 12:18:20.756: INFO: Created: latency-svc-xbznv
Aug 11 12:18:20.783: INFO: Got endpoints: latency-svc-xbznv [1.75238197s]
Aug 11 12:18:20.905: INFO: Created: latency-svc-vw5p2
Aug 11 12:18:20.910: INFO: Got endpoints: latency-svc-vw5p2 [1.813630273s]
Aug 11 12:18:20.954: INFO: Created: latency-svc-fqjtq
Aug 11 12:18:20.981: INFO: Got endpoints: latency-svc-fqjtq [1.854048263s]
Aug 11 12:18:21.049: INFO: Created: latency-svc-mzkn5
Aug 11 12:18:21.054: INFO: Got endpoints: latency-svc-mzkn5 [1.872304152s]
Aug 11 12:18:21.077: INFO: Created: latency-svc-7krr8
Aug 11 12:18:21.091: INFO: Got endpoints: latency-svc-7krr8 [1.846390518s]
Aug 11 12:18:21.140: INFO: Created: latency-svc-58szp
Aug 11 12:18:21.186: INFO: Got endpoints: latency-svc-58szp [1.882272734s]
Aug 11 12:18:21.255: INFO: Created: latency-svc-zcgdp
Aug 11 12:18:21.265: INFO: Got endpoints: latency-svc-zcgdp [1.869191792s]
Aug 11 12:18:21.345: INFO: Created: latency-svc-d9nft
Aug 11 12:18:21.356: INFO: Got endpoints: latency-svc-d9nft [1.876071138s]
Aug 11 12:18:21.375: INFO: Created: latency-svc-2vkkn
Aug 11 12:18:21.391: INFO: Got endpoints: latency-svc-2vkkn [1.792801908s]
Aug 11 12:18:21.510: INFO: Created: latency-svc-4ljzw
Aug 11 12:18:21.515: INFO: Got endpoints: latency-svc-4ljzw [1.794743834s]
Aug 11 12:18:21.560: INFO: Created: latency-svc-2hd7z
Aug 11 12:18:21.584: INFO: Got endpoints: latency-svc-2hd7z [1.528597028s]
Aug 11 12:18:21.659: INFO: Created: latency-svc-wzh4p
Aug 11 12:18:21.662: INFO: Got endpoints: latency-svc-wzh4p [1.367121251s]
Aug 11 12:18:21.718: INFO: Created: latency-svc-rnt25
Aug 11 12:18:21.736: INFO: Got endpoints: latency-svc-rnt25 [1.379805917s]
Aug 11 12:18:21.809: INFO: Created: latency-svc-7m4kt
Aug 11 12:18:21.826: INFO: Got endpoints: latency-svc-7m4kt [1.30400986s]
Aug 11 12:18:21.862: INFO: Created: latency-svc-cbvs5
Aug 11 12:18:21.880: INFO: Got endpoints: latency-svc-cbvs5 [1.257071508s]
Aug 11 12:18:21.969: INFO: Created: latency-svc-22zhp
Aug 11 12:18:21.999: INFO: Got endpoints: latency-svc-22zhp [1.216056689s]
Aug 11 12:18:22.085: INFO: Created: latency-svc-vgkcm
Aug 11 12:18:22.090: INFO: Got endpoints: latency-svc-vgkcm [1.179918494s]
Aug 11 12:18:22.139: INFO: Created: latency-svc-2kcd2
Aug 11 12:18:22.222: INFO: Got endpoints: latency-svc-2kcd2 [1.241664869s]
Aug 11 12:18:22.312: INFO: Created: latency-svc-4k62l
Aug 11 12:18:22.385: INFO: Got endpoints: latency-svc-4k62l [1.331707907s]
Aug 11 12:18:22.430: INFO: Created: latency-svc-2r8p7
Aug 11 12:18:22.451: INFO: Got endpoints: latency-svc-2r8p7 [1.360040209s]
Aug 11 12:18:22.541: INFO: Created: latency-svc-9jd8c
Aug 11 12:18:22.553: INFO: Got endpoints: latency-svc-9jd8c [1.366737221s]
Aug 11 12:18:22.592: INFO: Created: latency-svc-n6sfs
Aug 11 12:18:22.613: INFO: Got endpoints: latency-svc-n6sfs [1.348592549s]
Aug 11 12:18:22.690: INFO: Created: latency-svc-dth62
Aug 11 12:18:22.732: INFO: Got endpoints: latency-svc-dth62 [1.376662966s]
Aug 11 12:18:22.766: INFO: Created: latency-svc-g2fnb
Aug 11 12:18:22.821: INFO: Got endpoints: latency-svc-g2fnb [1.430244899s]
Aug 11 12:18:22.894: INFO: Created: latency-svc-bb488
Aug 11 12:18:22.913: INFO: Got endpoints: latency-svc-bb488 [1.398076921s]
Aug 11 12:18:22.989: INFO: Created: latency-svc-hn62n
Aug 11 12:18:23.008: INFO: Got endpoints: latency-svc-hn62n [1.423826688s]
Aug 11 12:18:23.103: INFO: Created: latency-svc-f7bdv
Aug 11 12:18:23.110: INFO: Got endpoints: latency-svc-f7bdv [1.448404948s]
Aug 11 12:18:23.145: INFO: Created: latency-svc-rd5dt
Aug 11 12:18:23.189: INFO: Got endpoints: latency-svc-rd5dt [1.453642465s]
Aug 11 12:18:23.259: INFO: Created: latency-svc-qvn8b
Aug 11 12:18:23.262: INFO: Got endpoints: latency-svc-qvn8b [1.435704824s]
Aug 11 12:18:23.307: INFO: Created: latency-svc-j8ffb
Aug 11 12:18:23.351: INFO: Got endpoints: latency-svc-j8ffb [1.471732112s]
Aug 11 12:18:23.408: INFO: Created: latency-svc-pthsf
Aug 11 12:18:23.417: INFO: Got endpoints: latency-svc-pthsf [1.418272917s]
Aug 11 12:18:23.452: INFO: Created: latency-svc-99gfv
Aug 11 12:18:23.494: INFO: Got endpoints: latency-svc-99gfv [1.403478159s]
Aug 11 12:18:24.023: INFO: Created: latency-svc-sckx2
Aug 11 12:18:24.035: INFO: Got endpoints: latency-svc-sckx2 [1.812819392s]
Aug 11 12:18:24.065: INFO: Created: latency-svc-rxkp7
Aug 11 12:18:24.290: INFO: Got endpoints: latency-svc-rxkp7 [1.904131635s]
Aug 11 12:18:24.454: INFO: Created: latency-svc-5zrkb
Aug 11 12:18:24.475: INFO: Got endpoints: latency-svc-5zrkb [439.329968ms]
Aug 11 12:18:24.524: INFO: Created: latency-svc-rkjvb
Aug 11 12:18:24.591: INFO: Got endpoints: latency-svc-rkjvb [2.140341061s]
Aug 11 12:18:24.618: INFO: Created: latency-svc-zhzhn
Aug 11 12:18:24.624: INFO: Got endpoints: latency-svc-zhzhn [2.070842303s]
Aug 11 12:18:24.647: INFO: Created: latency-svc-s5fbr
Aug 11 12:18:24.655: INFO: Got endpoints: latency-svc-s5fbr [2.041319071s]
Aug 11 12:18:24.713: INFO: Created: latency-svc-vnpdw
Aug 11 12:18:24.735: INFO: Got endpoints: latency-svc-vnpdw [2.003038351s]
Aug 11 12:18:24.859: INFO: Created: latency-svc-75kvr
Aug 11 12:18:24.893: INFO: Got endpoints: latency-svc-75kvr [2.071376403s]
Aug 11 12:18:25.001: INFO: Created: latency-svc-7xx82
Aug 11 12:18:25.010: INFO: Got endpoints: latency-svc-7xx82 [2.09686773s]
Aug 11 12:18:25.032: INFO: Created: latency-svc-xmqdr
Aug 11 12:18:25.040: INFO: Got endpoints: latency-svc-xmqdr [2.032074606s]
Aug 11 12:18:25.068: INFO: Created: latency-svc-jnzwr
Aug 11 12:18:25.077: INFO: Got endpoints: latency-svc-jnzwr [1.966226305s]
Aug 11 12:18:25.139: INFO: Created: latency-svc-m7qxr
Aug 11 12:18:25.142: INFO: Got endpoints: latency-svc-m7qxr [1.952822987s]
Aug 11 12:18:25.174: INFO: Created: latency-svc-tkbpv
Aug 11 12:18:25.188: INFO: Got endpoints: latency-svc-tkbpv [1.925868106s]
Aug 11 12:18:25.216: INFO: Created: latency-svc-jnm6l
Aug 11 12:18:25.230: INFO: Got endpoints: latency-svc-jnm6l [1.878663434s]
Aug 11 12:18:25.306: INFO: Created: latency-svc-4nfjv
Aug 11 12:18:25.327: INFO: Got endpoints: latency-svc-4nfjv [1.909912173s]
Aug 11 12:18:25.384: INFO: Created: latency-svc-8tgdn
Aug 11 12:18:25.432: INFO: Got endpoints: latency-svc-8tgdn [1.938688373s]
Aug 11 12:18:25.458: INFO: Created: latency-svc-jz6tf
Aug 11 12:18:25.472: INFO: Got endpoints: latency-svc-jz6tf [1.181944943s]
Aug 11 12:18:25.498: INFO: Created: latency-svc-4vz4n
Aug 11 12:18:25.508: INFO: Got endpoints: latency-svc-4vz4n [1.033180359s]
Aug 11 12:18:25.582: INFO: Created: latency-svc-42gk8
Aug 11 12:18:25.622: INFO: Got endpoints: latency-svc-42gk8 [1.030934883s]
Aug 11 12:18:25.662: INFO: Created: latency-svc-h7rnk
Aug 11 12:18:25.677: INFO: Got endpoints: latency-svc-h7rnk [1.0525913s]
Aug 11 12:18:25.720: INFO: Created: latency-svc-rn65r
Aug 11 12:18:25.738: INFO: Got endpoints: latency-svc-rn65r [1.082698666s]
Aug 11 12:18:25.762: INFO: Created: latency-svc-zwd6n
Aug 11 12:18:25.774: INFO: Got endpoints: latency-svc-zwd6n [1.038146643s]
Aug 11 12:18:25.875: INFO: Created: latency-svc-jnknv
Aug 11 12:18:25.879: INFO: Got endpoints: latency-svc-jnknv [986.09186ms]
Aug 11 12:18:25.906: INFO: Created: latency-svc-2rlfm
Aug 11 12:18:25.920: INFO: Got endpoints: latency-svc-2rlfm [910.321061ms]
Aug 11 12:18:25.963: INFO: Created: latency-svc-t28r5
Aug 11 12:18:25.972: INFO: Got endpoints: latency-svc-t28r5 [932.324522ms]
Aug 11 12:18:26.057: INFO: Created: latency-svc-7vb47
Aug 11 12:18:26.069: INFO: Got endpoints: latency-svc-7vb47 [992.37146ms]
Aug 11 12:18:26.094: INFO: Created: latency-svc-27n2d
Aug 11 12:18:26.180: INFO: Got endpoints: latency-svc-27n2d [1.03802896s]
Aug 11 12:18:26.200: INFO: Created: latency-svc-5bnw6
Aug 11 12:18:26.214: INFO: Got endpoints: latency-svc-5bnw6 [1.025793862s]
Aug 11 12:18:26.255: INFO: Created: latency-svc-rj6w2
Aug 11 12:18:26.313: INFO: Got endpoints: latency-svc-rj6w2 [1.08232954s]
Aug 11 12:18:26.338: INFO: Created: latency-svc-n8f8b
Aug 11 12:18:26.369: INFO: Got endpoints: latency-svc-n8f8b [1.041562515s]
Aug 11 12:18:26.404: INFO: Created: latency-svc-d89x6
Aug 11 12:18:26.443: INFO: Got endpoints: latency-svc-d89x6 [1.011155313s]
Aug 11 12:18:26.484: INFO: Created: latency-svc-nkrg7
Aug 11 12:18:26.498: INFO: Got endpoints: latency-svc-nkrg7 [1.026356847s]
Aug 11 12:18:26.544: INFO: Created: latency-svc-n75f8
Aug 11 12:18:26.582: INFO: Got endpoints: latency-svc-n75f8 [1.073554467s]
Aug 11 12:18:26.602: INFO: Created: latency-svc-l8lqd
Aug 11 12:18:26.615: INFO: Got endpoints: latency-svc-l8lqd [992.389955ms]
Aug 11 12:18:26.639: INFO: Created: latency-svc-jxp49
Aug 11 12:18:26.649: INFO: Got endpoints: latency-svc-jxp49 [972.200441ms]
Aug 11 12:18:26.687: INFO: Created: latency-svc-fspp2
Aug 11 12:18:26.748: INFO: Got endpoints: latency-svc-fspp2 [1.010529576s]
Aug 11 12:18:26.788: INFO: Created: latency-svc-mrdfs
Aug 11 12:18:26.817: INFO: Got endpoints: latency-svc-mrdfs [1.043621955s]
Aug 11 12:18:26.893: INFO: Created: latency-svc-4zcfp
Aug 11 12:18:26.901: INFO: Got endpoints: latency-svc-4zcfp [1.021992323s]
Aug 11 12:18:26.939: INFO: Created: latency-svc-6jgjp
Aug 11 12:18:26.992: INFO: Got endpoints: latency-svc-6jgjp [1.072362197s]
Aug 11 12:18:27.048: INFO: Created: latency-svc-285ww
Aug 11 12:18:27.068: INFO: Got endpoints: latency-svc-285ww [1.095336861s]
Aug 11 12:18:27.131: INFO: Created: latency-svc-tsjmr
Aug 11 12:18:27.169: INFO: Got endpoints: latency-svc-tsjmr [1.09945346s]
Aug 11 12:18:27.223: INFO: Created: latency-svc-jz5d4
Aug 11 12:18:27.230: INFO: Got endpoints: latency-svc-jz5d4 [1.049701752s]
Aug 11 12:18:27.256: INFO: Created: latency-svc-rkncx
Aug 11 12:18:27.300: INFO: Got endpoints: latency-svc-rkncx [1.085844559s]
Aug 11 12:18:27.316: INFO: Created: latency-svc-twhc8
Aug 11 12:18:27.334: INFO: Got endpoints: latency-svc-twhc8 [1.021033518s]
Aug 11 12:18:27.359: INFO: Created: latency-svc-wbzqk
Aug 11 12:18:27.375: INFO: Got endpoints: latency-svc-wbzqk [1.006661397s]
Aug 11 12:18:27.450: INFO: Created: latency-svc-vkj7h
Aug 11 12:18:27.455: INFO: Got endpoints: latency-svc-vkj7h [1.011640427s]
Aug 11 12:18:27.508: INFO: Created: latency-svc-zthwf
Aug 11 12:18:27.527: INFO: Got endpoints: latency-svc-zthwf [1.029439151s]
Aug 11 12:18:27.600: INFO: Created: latency-svc-dxzwq
Aug 11 12:18:27.630: INFO: Created: latency-svc-lccqw
Aug 11 12:18:27.630: INFO: Got endpoints: latency-svc-dxzwq [1.048225695s]
Aug 11 12:18:27.660: INFO: Got endpoints: latency-svc-lccqw [1.044779792s]
Aug 11 12:18:27.686: INFO: Created: latency-svc-g5vkv
Aug 11 12:18:27.743: INFO: Got endpoints: latency-svc-g5vkv [1.094513388s]
Aug 11 12:18:27.760: INFO: Created: latency-svc-clwtp
Aug 11 12:18:27.781: INFO: Got endpoints: latency-svc-clwtp [1.032427265s]
Aug 11 12:18:27.826: INFO: Created: latency-svc-dfhr7
Aug 11 12:18:27.834: INFO: Got endpoints: latency-svc-dfhr7 [1.016726068s]
Aug 11 12:18:27.911: INFO: Created: latency-svc-l7nsc
Aug 11 12:18:27.931: INFO: Got endpoints: latency-svc-l7nsc [1.030012687s]
Aug 11 12:18:27.954: INFO: Created: latency-svc-vsx4c
Aug 11 12:18:27.966: INFO: Got endpoints: latency-svc-vsx4c [973.880744ms]
Aug 11 12:18:28.031: INFO: Created: latency-svc-lbwt9
Aug 11 12:18:28.054: INFO: Got endpoints: latency-svc-lbwt9 [986.142465ms]
Aug 11 12:18:28.098: INFO: Created: latency-svc-xv7ft
Aug 11 12:18:28.112: INFO: Got endpoints: latency-svc-xv7ft [943.036679ms]
Aug 11 12:18:28.192: INFO: Created: latency-svc-b7zfp
Aug 11 12:18:28.234: INFO: Created: latency-svc-sr2f4
Aug 11 12:18:28.235: INFO: Got endpoints: latency-svc-b7zfp [1.005027044s]
Aug 11 12:18:28.270: INFO: Got endpoints: latency-svc-sr2f4 [969.791863ms]
Aug 11 12:18:28.366: INFO: Created: latency-svc-c2xq5
Aug 11 12:18:28.622: INFO: Got endpoints: latency-svc-c2xq5 [1.288627644s]
Aug 11 12:18:28.893: INFO: Created: latency-svc-22q2l
Aug 11 12:18:28.956: INFO: Created: latency-svc-kbxjh
Aug 11 12:18:28.956: INFO: Got endpoints: latency-svc-22q2l [1.580598836s]
Aug 11 12:18:29.038: INFO: Got endpoints: latency-svc-kbxjh [1.582872185s]
Aug 11 12:18:29.075: INFO: Created: latency-svc-w2rcr
Aug 11 12:18:29.106: INFO: Got endpoints: latency-svc-w2rcr [1.578404334s]
Aug 11 12:18:29.182: INFO: Created: latency-svc-r2mg7
Aug 11 12:18:29.190: INFO: Got endpoints: latency-svc-r2mg7 [1.559735913s]
Aug 11 12:18:29.213: INFO: Created: latency-svc-5dns5
Aug 11 12:18:29.238: INFO: Got endpoints: latency-svc-5dns5 [1.578410061s]
Aug 11 12:18:29.282: INFO: Created: latency-svc-gzlpp
Aug 11 12:18:29.366: INFO: Got endpoints: latency-svc-gzlpp [1.622416382s]
Aug 11 12:18:29.695: INFO: Created: latency-svc-4l92b
Aug 11 12:18:30.199: INFO: Got endpoints: latency-svc-4l92b [2.418366878s]
Aug 11 12:18:30.572: INFO: Created: latency-svc-7tsps
Aug 11 12:18:30.575: INFO: Got endpoints: latency-svc-7tsps [2.740616389s]
Aug 11 12:18:31.292: INFO: Created: latency-svc-k2zp6
Aug 11 12:18:31.409: INFO: Got endpoints: latency-svc-k2zp6 [3.478023077s]
Aug 11 12:18:31.576: INFO: Created: latency-svc-2tz89
Aug 11 12:18:31.580: INFO: Got endpoints: latency-svc-2tz89 [3.61375145s]
Aug 11 12:18:32.284: INFO: Created: latency-svc-gbl2z
Aug 11 12:18:32.474: INFO: Got endpoints: latency-svc-gbl2z [4.419794507s]
Aug 11 12:18:32.477: INFO: Created: latency-svc-d2vbp
Aug 11 12:18:32.490: INFO: Got endpoints: latency-svc-d2vbp [4.378691875s]
Aug 11 12:18:32.731: INFO: Created: latency-svc-njqfz
Aug 11 12:18:32.791: INFO: Got endpoints: latency-svc-njqfz [4.555670948s]
Aug 11 12:18:32.791: INFO: Created: latency-svc-n7ndp
Aug 11 12:18:32.887: INFO: Got endpoints: latency-svc-n7ndp [4.617294053s]
Aug 11 12:18:32.928: INFO: Created: latency-svc-lmkmd
Aug 11 12:18:32.962: INFO: Got endpoints: latency-svc-lmkmd [4.339648093s]
Aug 11 12:18:33.187: INFO: Created: latency-svc-5s2cj
Aug 11 12:18:33.191: INFO: Got endpoints: latency-svc-5s2cj [4.23482265s]
Aug 11 12:18:33.420: INFO: Created: latency-svc-v7wlz
Aug 11 12:18:33.582: INFO: Got endpoints: latency-svc-v7wlz [4.543364665s]
Aug 11 12:18:33.619: INFO: Created: latency-svc-rprvc
Aug 11 12:18:33.634: INFO: Got endpoints: latency-svc-rprvc [4.528325993s]
Aug 11 12:18:33.749: INFO: Created: latency-svc-xbtf9
Aug 11 12:18:33.819: INFO: Created: latency-svc-ktfp2
Aug 11 12:18:33.819: INFO: Got endpoints: latency-svc-xbtf9 [4.629866545s]
Aug 11 12:18:33.959: INFO: Got endpoints: latency-svc-ktfp2 [4.720674393s]
Aug 11 12:18:33.980: INFO: Created: latency-svc-26nn5
Aug 11 12:18:34.028: INFO: Got endpoints: latency-svc-26nn5 [4.662535956s]
Aug 11 12:18:34.126: INFO: Created: latency-svc-dhwk7
Aug 11 12:18:34.180: INFO: Got endpoints: latency-svc-dhwk7 [3.981039641s]
Aug 11 12:18:34.180: INFO: Created: latency-svc-b77qv
Aug 11 12:18:34.300: INFO: Got endpoints: latency-svc-b77qv [3.725221953s]
Aug 11 12:18:34.395: INFO: Created: latency-svc-vdc5r
Aug 11 12:18:34.528: INFO: Got endpoints: latency-svc-vdc5r [3.119203275s]
Aug 11 12:18:34.755: INFO: Created: latency-svc-qqbnt
Aug 11 12:18:34.784: INFO: Got endpoints: latency-svc-qqbnt [3.203704238s]
Aug 11 12:18:34.959: INFO: Created: latency-svc-dqlsl
Aug 11 12:18:34.977: INFO: Got endpoints: latency-svc-dqlsl [2.503056319s]
Aug 11 12:18:35.137: INFO: Created: latency-svc-cwzxx
Aug 11 12:18:35.181: INFO: Got endpoints: latency-svc-cwzxx [2.690768684s]
Aug 11 12:18:35.378: INFO: Created: latency-svc-4p9pf
Aug 11 12:18:35.433: INFO: Got endpoints: latency-svc-4p9pf [2.642463816s]
Aug 11 12:18:35.599: INFO: Created: latency-svc-sqmlz
Aug 11 12:18:35.643: INFO: Got endpoints: latency-svc-sqmlz [2.756317184s]
Aug 11 12:18:35.785: INFO: Created: latency-svc-4s7tc
Aug 11 12:18:35.793: INFO: Got endpoints: latency-svc-4s7tc [2.830803199s]
Aug 11 12:18:35.852: INFO: Created: latency-svc-87xdr
Aug 11 12:18:35.977: INFO: Got endpoints: latency-svc-87xdr [2.786079106s]
Aug 11 12:18:35.995: INFO: Created: latency-svc-8wwln
Aug 11 12:18:36.057: INFO: Got endpoints: latency-svc-8wwln [2.475691898s]
Aug 11 12:18:36.219: INFO: Created: latency-svc-x4674
Aug 11 12:18:36.402: INFO: Got endpoints: latency-svc-x4674 [2.767850267s]
Aug 11 12:18:36.452: INFO: Created: latency-svc-zbmhq
Aug 11 12:18:36.478: INFO: Got endpoints: latency-svc-zbmhq [2.658259246s]
Aug 11 12:18:37.150: INFO: Created: latency-svc-9qgsr
Aug 11 12:18:37.521: INFO: Got endpoints: latency-svc-9qgsr [3.562404659s]
Aug 11 12:18:37.527: INFO: Created: latency-svc-llxkv
Aug 11 12:18:37.755: INFO: Got endpoints: latency-svc-llxkv [3.726575774s]
Aug 11 12:18:37.828: INFO: Created: latency-svc-hwgpz
Aug 11 12:18:38.001: INFO: Got endpoints: latency-svc-hwgpz [3.82071917s]
Aug 11 12:18:38.584: INFO: Created: latency-svc-wkchv
Aug 11 12:18:38.869: INFO: Got endpoints: latency-svc-wkchv [4.569170455s]
Aug 11 12:18:39.121: INFO: Created: latency-svc-jxbfr
Aug 11 12:18:39.160: INFO: Got endpoints: latency-svc-jxbfr [4.63121308s]
Aug 11 12:18:39.378: INFO: Created: latency-svc-fv2lj
Aug 11 12:18:39.460: INFO: Got endpoints: latency-svc-fv2lj [4.676562502s]
Aug 11 12:18:39.725: INFO: Created: latency-svc-46lsn
Aug 11 12:18:39.766: INFO: Got endpoints: latency-svc-46lsn [4.788815061s]
Aug 11 12:18:40.527: INFO: Created: latency-svc-dmnww
Aug 11 12:18:40.923: INFO: Got endpoints: latency-svc-dmnww [5.741859583s]
Aug 11 12:18:40.931: INFO: Created: latency-svc-c27mz
Aug 11 12:18:40.953: INFO: Got endpoints: latency-svc-c27mz [5.52006707s]
Aug 11 12:18:41.222: INFO: Created: latency-svc-qlzvb
Aug 11 12:18:41.779: INFO: Got endpoints: latency-svc-qlzvb [6.135945771s]
Aug 11 12:18:42.082: INFO: Created: latency-svc-kngtx
Aug 11 12:18:42.252: INFO: Got endpoints: latency-svc-kngtx [6.458711574s]
Aug 11 12:18:42.504: INFO: Created: latency-svc-mr8th
Aug 11 12:18:42.538: INFO: Got endpoints: latency-svc-mr8th [6.561185991s]
Aug 11 12:18:42.749: INFO: Created: latency-svc-gkqln
Aug 11 12:18:42.817: INFO: Got endpoints: latency-svc-gkqln [6.759851835s]
Aug 11 12:18:43.033: INFO: Created: latency-svc-br4dd
Aug 11 12:18:43.087: INFO: Got endpoints: latency-svc-br4dd [6.684785049s]
Aug 11 12:18:43.307: INFO: Created: latency-svc-cz9gk
Aug 11 12:18:43.399: INFO: Got endpoints: latency-svc-cz9gk [6.92092689s]
Aug 11 12:18:43.596: INFO: Created: latency-svc-6m49p
Aug 11 12:18:43.621: INFO: Got endpoints: latency-svc-6m49p [6.099518673s]
Aug 11 12:18:43.839: INFO: Created: latency-svc-kgnnn
Aug 11 12:18:44.227: INFO: Got endpoints: latency-svc-kgnnn [6.471402727s]
Aug 11 12:18:44.523: INFO: Created: latency-svc-jbd7c
Aug 11 12:18:44.532: INFO: Got endpoints: latency-svc-jbd7c [6.531125922s]
Aug 11 12:18:45.271: INFO: Created: latency-svc-k8fp4
Aug 11 12:18:45.280: INFO: Got endpoints: latency-svc-k8fp4 [6.410753123s]
Aug 11 12:18:45.510: INFO: Created: latency-svc-mjx72
Aug 11 12:18:45.731: INFO: Got endpoints: latency-svc-mjx72 [6.571656445s]
Aug 11 12:18:45.759: INFO: Created: latency-svc-pff27
Aug 11 12:18:45.936: INFO: Got endpoints: latency-svc-pff27 [6.475290928s]
Aug 11 12:18:46.181: INFO: Created: latency-svc-4vrgw
Aug 11 12:18:46.230: INFO: Got endpoints: latency-svc-4vrgw [6.464277617s]
Aug 11 12:18:46.422: INFO: Created: latency-svc-jw9jf
Aug 11 12:18:46.502: INFO: Got endpoints: latency-svc-jw9jf [5.578706633s]
Aug 11 12:18:46.641: INFO: Created: latency-svc-xc8nj
Aug 11 12:18:46.664: INFO: Got endpoints: latency-svc-xc8nj [5.710733965s]
Aug 11 12:18:47.068: INFO: Created: latency-svc-975rb
Aug 11 12:18:47.325: INFO: Got endpoints: latency-svc-975rb [5.545509104s]
Aug 11 12:18:48.423: INFO: Created: latency-svc-rw2x9
Aug 11 12:18:48.965: INFO: Got endpoints: latency-svc-rw2x9 [6.713644679s]
Aug 11 12:18:49.674: INFO: Created: latency-svc-dnb8p
Aug 11 12:18:49.959: INFO: Got endpoints: latency-svc-dnb8p [7.420639181s]
Aug 11 12:18:50.756: INFO: Created: latency-svc-5hfwj
Aug 11 12:18:51.193: INFO: Got endpoints: latency-svc-5hfwj [8.375811794s]
Aug 11 12:18:52.348: INFO: Created: latency-svc-wv2fr
Aug 11 12:18:52.416: INFO: Got endpoints: latency-svc-wv2fr [9.32846341s]
Aug 11 12:18:53.722: INFO: Created: latency-svc-v9gn7
Aug 11 12:18:54.123: INFO: Got endpoints: latency-svc-v9gn7 [10.724330046s]
Aug 11 12:18:54.402: INFO: Created: latency-svc-pkvpm
Aug 11 12:18:54.770: INFO: Got endpoints: latency-svc-pkvpm [11.149476581s]
Aug 11 12:18:55.037: INFO: Created: latency-svc-wh8gk
Aug 11 12:18:55.469: INFO: Got endpoints: latency-svc-wh8gk [11.242259193s]
Aug 11 12:18:56.193: INFO: Created: latency-svc-9g5vf
Aug 11 12:18:56.654: INFO: Got endpoints: latency-svc-9g5vf [12.122028475s]
Aug 11 12:18:56.966: INFO: Created: latency-svc-nhmjc
Aug 11 12:18:57.339: INFO: Got endpoints: latency-svc-nhmjc [12.058956951s]
Aug 11 12:18:58.853: INFO: Created: latency-svc-bnpgk
Aug 11 12:19:00.132: INFO: Got endpoints: latency-svc-bnpgk [14.400896473s]
Aug 11 12:19:00.766: INFO: Created: latency-svc-jtkj2
Aug 11 12:19:01.265: INFO: Got endpoints: latency-svc-jtkj2 [15.329571013s]
Aug 11 12:19:01.271: INFO: Created: latency-svc-9jk6w
Aug 11 12:19:01.499: INFO: Got endpoints: latency-svc-9jk6w [15.268689541s]
Aug 11 12:19:02.094: INFO: Created: latency-svc-grbgh
Aug 11 12:19:02.159: INFO: Got endpoints: latency-svc-grbgh [15.657379411s]
Aug 11 12:19:03.285: INFO: Created: latency-svc-nflbm
Aug 11 12:19:03.493: INFO: Got endpoints: latency-svc-nflbm [16.828362657s]
Aug 11 12:19:04.354: INFO: Created: latency-svc-bncl2
Aug 11 12:19:05.164: INFO: Got endpoints: latency-svc-bncl2 [17.839412975s]
Aug 11 12:19:05.891: INFO: Created: latency-svc-dmjdv
Aug 11 12:19:05.976: INFO: Got endpoints: latency-svc-dmjdv [17.010545345s]
Aug 11 12:19:06.151: INFO: Created: latency-svc-8g5pb
Aug 11 12:19:06.234: INFO: Got endpoints: latency-svc-8g5pb [16.274548315s]
Aug 11 12:19:06.349: INFO: Created: latency-svc-ds8x2
Aug 11 12:19:06.438: INFO: Got endpoints: latency-svc-ds8x2 [15.244935077s]
Aug 11 12:19:06.659: INFO: Created: latency-svc-8rdk5
Aug 11 12:19:06.696: INFO: Got endpoints: latency-svc-8rdk5 [14.280594147s]
Aug 11 12:19:06.924: INFO: Created: latency-svc-7t2xn
Aug 11 12:19:06.954: INFO: Got endpoints: latency-svc-7t2xn [12.830887148s]
Aug 11 12:19:07.133: INFO: Created: latency-svc-hl5tw
Aug 11 12:19:07.136: INFO: Got endpoints: latency-svc-hl5tw [12.36535902s]
Aug 11 12:19:07.308: INFO: Created: latency-svc-992s8
Aug 11 12:19:07.344: INFO: Got endpoints: latency-svc-992s8 [11.875466444s]
Aug 11 12:19:07.345: INFO: Latencies: [118.429984ms 166.922175ms 249.597803ms 278.073082ms 343.116949ms 439.329968ms 460.189102ms 569.050174ms 656.270956ms 738.876443ms 910.321061ms 932.324522ms 943.036679ms 969.791863ms 972.200441ms 973.880744ms 986.09186ms 986.142465ms 992.37146ms 992.389955ms 1.005027044s 1.006661397s 1.010529576s 1.011155313s 1.011640427s 1.016726068s 1.021033518s 1.021992323s 1.025793862s 1.026356847s 1.029439151s 1.030012687s 1.030934883s 1.032427265s 1.033180359s 1.03802896s 1.038146643s 1.041562515s 1.043621955s 1.044779792s 1.048225695s 1.049701752s 1.0525913s 1.072362197s 1.073554467s 1.08232954s 1.082698666s 1.085844559s 1.094513388s 1.095336861s 1.09945346s 1.179918494s 1.181944943s 1.216056689s 1.226409806s 1.241664869s 1.257071508s 1.288627644s 1.30400986s 1.331707907s 1.348592549s 1.360040209s 1.366737221s 1.367121251s 1.376662966s 1.379805917s 1.392848363s 1.398076921s 1.403478159s 1.418272917s 1.423826688s 1.430244899s 1.435704824s 1.448404948s 1.453642465s 1.471732112s 1.472200729s 1.528597028s 1.546148521s 1.559735913s 1.571562959s 1.578404334s 1.578410061s 1.579021004s 1.580598836s 1.582872185s 1.59247908s 1.598669949s 1.603404704s 1.609910143s 1.613226264s 1.614226036s 1.622416382s 1.634166777s 1.635031554s 1.636412265s 1.64738842s 1.652209179s 1.680136594s 1.75238197s 1.792801908s 1.794743834s 1.812819392s 1.813630273s 1.846390518s 1.854048263s 1.869191792s 1.872304152s 1.876071138s 1.878663434s 1.882272734s 1.900811968s 1.904131635s 1.909912173s 1.925868106s 1.938688373s 1.952822987s 1.966226305s 2.003038351s 2.032074606s 2.041319071s 2.070842303s 2.071376403s 2.09686773s 2.140341061s 2.418366878s 2.475691898s 2.503056319s 2.642463816s 2.658259246s 2.690768684s 2.740616389s 2.756317184s 2.767850267s 2.786079106s 2.830803199s 3.119203275s 3.203704238s 3.478023077s 3.562404659s 3.61375145s 3.725221953s 3.726575774s 3.82071917s 3.981039641s 4.23482265s 4.339648093s 4.378691875s 4.419794507s 4.528325993s 4.543364665s 4.555670948s 
4.569170455s 4.617294053s 4.629866545s 4.63121308s 4.662535956s 4.676562502s 4.720674393s 4.788815061s 5.52006707s 5.545509104s 5.578706633s 5.710733965s 5.741859583s 6.099518673s 6.135945771s 6.410753123s 6.458711574s 6.464277617s 6.471402727s 6.475290928s 6.531125922s 6.561185991s 6.571656445s 6.684785049s 6.713644679s 6.759851835s 6.92092689s 7.420639181s 8.375811794s 9.32846341s 10.724330046s 11.149476581s 11.242259193s 11.875466444s 12.058956951s 12.122028475s 12.36535902s 12.830887148s 14.280594147s 14.400896473s 15.244935077s 15.268689541s 15.329571013s 15.657379411s 16.274548315s 16.828362657s 17.010545345s 17.839412975s]
Aug 11 12:19:07.345: INFO: 50 %ile: 1.792801908s
Aug 11 12:19:07.345: INFO: 90 %ile: 8.375811794s
Aug 11 12:19:07.345: INFO: 99 %ile: 17.010545345s
Aug 11 12:19:07.345: INFO: Total sample count: 200
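The percentile lines above (50/90/99 %ile over 200 sorted samples) can be reproduced from the sorted latency list with a nearest-rank calculation. This is an illustrative sketch, not the e2e framework's actual code — its rounding and indexing may differ:

```python
import math

def percentile(sorted_samples, p):
    """p-th percentile of an ascending-sorted list, nearest-rank method."""
    if not sorted_samples:
        raise ValueError("empty sample")
    # 1-based rank: ceil(p/100 * n), clamped so p=0 still picks the first sample.
    rank = max(1, math.ceil(p / 100 * len(sorted_samples)))
    return sorted_samples[rank - 1]

# Illustrative values only, loosely echoing the summary above (seconds).
latencies = [0.118, 0.167, 1.793, 8.376, 17.011]
print(percentile(latencies, 50))  # prints 1.793 (the middle of five samples)
```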
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:19:07.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4098" for this suite.

• [SLOW TEST:54.713 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":86,"skipped":1233,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:19:07.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-0de5bf05-1616-4247-9aa9-49ff587295b8
STEP: Creating a pod to test consume secrets
Aug 11 12:19:09.113: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-36fd7545-15d9-477d-9736-1ed802efa3f0" in namespace "projected-8225" to be "Succeeded or Failed"
Aug 11 12:19:09.295: INFO: Pod "pod-projected-secrets-36fd7545-15d9-477d-9736-1ed802efa3f0": Phase="Pending", Reason="", readiness=false. Elapsed: 181.148198ms
Aug 11 12:19:11.302: INFO: Pod "pod-projected-secrets-36fd7545-15d9-477d-9736-1ed802efa3f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188032371s
Aug 11 12:19:13.411: INFO: Pod "pod-projected-secrets-36fd7545-15d9-477d-9736-1ed802efa3f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297803243s
Aug 11 12:19:15.618: INFO: Pod "pod-projected-secrets-36fd7545-15d9-477d-9736-1ed802efa3f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.504371851s
Aug 11 12:19:17.768: INFO: Pod "pod-projected-secrets-36fd7545-15d9-477d-9736-1ed802efa3f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.654019021s
STEP: Saw pod success
Aug 11 12:19:17.768: INFO: Pod "pod-projected-secrets-36fd7545-15d9-477d-9736-1ed802efa3f0" satisfied condition "Succeeded or Failed"
Aug 11 12:19:18.020: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-36fd7545-15d9-477d-9736-1ed802efa3f0 container projected-secret-volume-test: 
STEP: delete the pod
Aug 11 12:19:18.281: INFO: Waiting for pod pod-projected-secrets-36fd7545-15d9-477d-9736-1ed802efa3f0 to disappear
Aug 11 12:19:18.350: INFO: Pod pod-projected-secrets-36fd7545-15d9-477d-9736-1ed802efa3f0 no longer exists
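The "Waiting for pod ... to disappear" / "no longer exists" pair above is a poll-until-gone loop against the API server. A minimal sketch under assumed names (`get_pod` stands in for a GET that yields `None` on a 404; timings are illustrative, not the framework's defaults):

```python
import time

def wait_for_gone(get_pod, timeout_s=300, interval_s=2):
    """Poll get_pod() until it reports the pod is gone, or the deadline passes.

    get_pod returning None stands in for a NotFound (404) from the API server.
    Returns True if the pod disappeared within timeout_s, else False.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_pod() is None:
            return True
        time.sleep(interval_s)
    return False
```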
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:19:18.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8225" for this suite.

• [SLOW TEST:10.952 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1247,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:19:18.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 11 12:19:25.853: INFO: Successfully updated pod "pod-update-7fd9caf0-e99d-4490-85cd-20e2b6b18aa2"
STEP: verifying the updated pod is in kubernetes
Aug 11 12:19:25.925: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:19:25.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3431" for this suite.

• [SLOW TEST:7.434 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1300,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:19:25.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 11 12:19:26.240: INFO: Waiting up to 5m0s for pod "downward-api-2c4e009c-6bbf-4c11-90b5-e351d46565e8" in namespace "downward-api-5959" to be "Succeeded or Failed"
Aug 11 12:19:26.298: INFO: Pod "downward-api-2c4e009c-6bbf-4c11-90b5-e351d46565e8": Phase="Pending", Reason="", readiness=false. Elapsed: 57.560152ms
Aug 11 12:19:28.421: INFO: Pod "downward-api-2c4e009c-6bbf-4c11-90b5-e351d46565e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181106752s
Aug 11 12:19:30.619: INFO: Pod "downward-api-2c4e009c-6bbf-4c11-90b5-e351d46565e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.37874746s
STEP: Saw pod success
Aug 11 12:19:30.619: INFO: Pod "downward-api-2c4e009c-6bbf-4c11-90b5-e351d46565e8" satisfied condition "Succeeded or Failed"
Aug 11 12:19:30.629: INFO: Trying to get logs from node kali-worker pod downward-api-2c4e009c-6bbf-4c11-90b5-e351d46565e8 container dapi-container: 
STEP: delete the pod
Aug 11 12:19:30.707: INFO: Waiting for pod downward-api-2c4e009c-6bbf-4c11-90b5-e351d46565e8 to disappear
Aug 11 12:19:30.713: INFO: Pod downward-api-2c4e009c-6bbf-4c11-90b5-e351d46565e8 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:19:30.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5959" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1354,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:19:30.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-a19beeab-cc3f-4306-8f07-58379428c0fd in namespace container-probe-8476
Aug 11 12:19:37.109: INFO: Started pod test-webserver-a19beeab-cc3f-4306-8f07-58379428c0fd in namespace container-probe-8476
STEP: checking the pod's current state and verifying that restartCount is present
Aug 11 12:19:37.182: INFO: Initial restart count of pod test-webserver-a19beeab-cc3f-4306-8f07-58379428c0fd is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:23:38.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8476" for this suite.

• [SLOW TEST:247.836 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1395,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:23:38.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Aug 11 12:23:39.196: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:23:39.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4850" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":91,"skipped":1396,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:23:39.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4926
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4926
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4926
Aug 11 12:23:39.900: INFO: Found 0 stateful pods, waiting for 1
Aug 11 12:23:49.905: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 11 12:23:49.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4926 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 11 12:23:50.233: INFO: stderr: "I0811 12:23:50.121032     742 log.go:172] (0xc000586420) (0xc00051eb40) Create stream\nI0811 12:23:50.121131     742 log.go:172] (0xc000586420) (0xc00051eb40) Stream added, broadcasting: 1\nI0811 12:23:50.124836     742 log.go:172] (0xc000586420) Reply frame received for 1\nI0811 12:23:50.124885     742 log.go:172] (0xc000586420) (0xc00092c000) Create stream\nI0811 12:23:50.124897     742 log.go:172] (0xc000586420) (0xc00092c000) Stream added, broadcasting: 3\nI0811 12:23:50.126011     742 log.go:172] (0xc000586420) Reply frame received for 3\nI0811 12:23:50.126090     742 log.go:172] (0xc000586420) (0xc0008ea000) Create stream\nI0811 12:23:50.126116     742 log.go:172] (0xc000586420) (0xc0008ea000) Stream added, broadcasting: 5\nI0811 12:23:50.126969     742 log.go:172] (0xc000586420) Reply frame received for 5\nI0811 12:23:50.189793     742 log.go:172] (0xc000586420) Data frame received for 5\nI0811 12:23:50.189816     742 log.go:172] (0xc0008ea000) (5) Data frame handling\nI0811 12:23:50.189828     742 log.go:172] (0xc0008ea000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 12:23:50.222993     742 log.go:172] (0xc000586420) Data frame received for 3\nI0811 12:23:50.223018     742 log.go:172] (0xc00092c000) (3) Data frame handling\nI0811 12:23:50.223026     742 log.go:172] (0xc00092c000) (3) Data frame sent\nI0811 12:23:50.223262     742 log.go:172] (0xc000586420) Data frame received for 5\nI0811 12:23:50.223287     742 log.go:172] (0xc0008ea000) (5) Data frame handling\nI0811 12:23:50.223511     742 log.go:172] (0xc000586420) Data frame received for 3\nI0811 12:23:50.223539     742 log.go:172] (0xc00092c000) (3) Data frame handling\nI0811 12:23:50.225223     742 log.go:172] (0xc000586420) Data frame received for 1\nI0811 12:23:50.225260     742 log.go:172] (0xc00051eb40) (1) Data frame handling\nI0811 12:23:50.225300     742 log.go:172] (0xc00051eb40) (1) Data frame sent\nI0811 12:23:50.225333  
   742 log.go:172] (0xc000586420) (0xc00051eb40) Stream removed, broadcasting: 1\nI0811 12:23:50.225417     742 log.go:172] (0xc000586420) Go away received\nI0811 12:23:50.225893     742 log.go:172] (0xc000586420) (0xc00051eb40) Stream removed, broadcasting: 1\nI0811 12:23:50.225928     742 log.go:172] (0xc000586420) (0xc00092c000) Stream removed, broadcasting: 3\nI0811 12:23:50.225952     742 log.go:172] (0xc000586420) (0xc0008ea000) Stream removed, broadcasting: 5\n"
Aug 11 12:23:50.233: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 11 12:23:50.233: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 11 12:23:50.255: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 11 12:24:00.259: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 12:24:00.259: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 12:24:00.280: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999421s
Aug 11 12:24:01.285: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.98920991s
Aug 11 12:24:02.288: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.984443231s
Aug 11 12:24:03.362: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.981039266s
Aug 11 12:24:04.366: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.906457416s
Aug 11 12:24:05.411: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.90268777s
Aug 11 12:24:06.435: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.857858781s
Aug 11 12:24:07.473: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.833625857s
Aug 11 12:24:08.561: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.796188517s
Aug 11 12:24:09.565: INFO: Verifying statefulset ss doesn't scale past 1 for another 708.128683ms
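The countdown above shows the framework re-reading the StatefulSet roughly once per second for a 10s window, asserting the replica count never exceeds 1 while ss-0 is unhealthy. A hedged sketch of that check (`read_replicas` is a hypothetical accessor, not the framework's API):

```python
import time

def verify_stays_at(read_replicas, expected, window_s=10, interval_s=1):
    """Repeatedly read the replica count for window_s seconds.

    Returns False the moment read_replicas() exceeds expected (scale-up was
    not halted); returns True if the count stayed bounded for the whole window.
    """
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        if read_replicas() > expected:
            return False
        time.sleep(interval_s)
    return True
```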
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4926
Aug 11 12:24:10.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4926 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 11 12:24:10.792: INFO: stderr: "I0811 12:24:10.711528     762 log.go:172] (0xc000a32000) (0xc00096c000) Create stream\nI0811 12:24:10.711615     762 log.go:172] (0xc000a32000) (0xc00096c000) Stream added, broadcasting: 1\nI0811 12:24:10.714833     762 log.go:172] (0xc000a32000) Reply frame received for 1\nI0811 12:24:10.714894     762 log.go:172] (0xc000a32000) (0xc000a68000) Create stream\nI0811 12:24:10.714921     762 log.go:172] (0xc000a32000) (0xc000a68000) Stream added, broadcasting: 3\nI0811 12:24:10.715935     762 log.go:172] (0xc000a32000) Reply frame received for 3\nI0811 12:24:10.715974     762 log.go:172] (0xc000a32000) (0xc000a680a0) Create stream\nI0811 12:24:10.715986     762 log.go:172] (0xc000a32000) (0xc000a680a0) Stream added, broadcasting: 5\nI0811 12:24:10.717205     762 log.go:172] (0xc000a32000) Reply frame received for 5\nI0811 12:24:10.783789     762 log.go:172] (0xc000a32000) Data frame received for 3\nI0811 12:24:10.783822     762 log.go:172] (0xc000a68000) (3) Data frame handling\nI0811 12:24:10.783841     762 log.go:172] (0xc000a68000) (3) Data frame sent\nI0811 12:24:10.783849     762 log.go:172] (0xc000a32000) Data frame received for 3\nI0811 12:24:10.783855     762 log.go:172] (0xc000a68000) (3) Data frame handling\nI0811 12:24:10.783981     762 log.go:172] (0xc000a32000) Data frame received for 5\nI0811 12:24:10.784000     762 log.go:172] (0xc000a680a0) (5) Data frame handling\nI0811 12:24:10.784016     762 log.go:172] (0xc000a680a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 12:24:10.784031     762 log.go:172] (0xc000a32000) Data frame received for 5\nI0811 12:24:10.784109     762 log.go:172] (0xc000a680a0) (5) Data frame handling\nI0811 12:24:10.785882     762 log.go:172] (0xc000a32000) Data frame received for 1\nI0811 12:24:10.785897     762 log.go:172] (0xc00096c000) (1) Data frame handling\nI0811 12:24:10.785911     762 log.go:172] (0xc00096c000) (1) Data frame sent\nI0811 12:24:10.785921  
   762 log.go:172] (0xc000a32000) (0xc00096c000) Stream removed, broadcasting: 1\nI0811 12:24:10.785982     762 log.go:172] (0xc000a32000) Go away received\nI0811 12:24:10.786227     762 log.go:172] (0xc000a32000) (0xc00096c000) Stream removed, broadcasting: 1\nI0811 12:24:10.786248     762 log.go:172] (0xc000a32000) (0xc000a68000) Stream removed, broadcasting: 3\nI0811 12:24:10.786260     762 log.go:172] (0xc000a32000) (0xc000a680a0) Stream removed, broadcasting: 5\n"
Aug 11 12:24:10.792: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 11 12:24:10.792: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 11 12:24:10.796: INFO: Found 1 stateful pods, waiting for 3
Aug 11 12:24:20.801: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 12:24:20.801: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 12:24:20.801: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 11 12:24:30.801: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 12:24:30.801: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 12:24:30.801: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 11 12:24:30.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4926 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 11 12:24:31.026: INFO: stderr: "I0811 12:24:30.951763     782 log.go:172] (0xc0005d02c0) (0xc00047cb40) Create stream\nI0811 12:24:30.951836     782 log.go:172] (0xc0005d02c0) (0xc00047cb40) Stream added, broadcasting: 1\nI0811 12:24:30.954563     782 log.go:172] (0xc0005d02c0) Reply frame received for 1\nI0811 12:24:30.954603     782 log.go:172] (0xc0005d02c0) (0xc0007ae0a0) Create stream\nI0811 12:24:30.954619     782 log.go:172] (0xc0005d02c0) (0xc0007ae0a0) Stream added, broadcasting: 3\nI0811 12:24:30.955637     782 log.go:172] (0xc0005d02c0) Reply frame received for 3\nI0811 12:24:30.955671     782 log.go:172] (0xc0005d02c0) (0xc000a4a000) Create stream\nI0811 12:24:30.955680     782 log.go:172] (0xc0005d02c0) (0xc000a4a000) Stream added, broadcasting: 5\nI0811 12:24:30.956654     782 log.go:172] (0xc0005d02c0) Reply frame received for 5\nI0811 12:24:31.018300     782 log.go:172] (0xc0005d02c0) Data frame received for 3\nI0811 12:24:31.018333     782 log.go:172] (0xc0007ae0a0) (3) Data frame handling\nI0811 12:24:31.018347     782 log.go:172] (0xc0007ae0a0) (3) Data frame sent\nI0811 12:24:31.018355     782 log.go:172] (0xc0005d02c0) Data frame received for 3\nI0811 12:24:31.018362     782 log.go:172] (0xc0007ae0a0) (3) Data frame handling\nI0811 12:24:31.018390     782 log.go:172] (0xc0005d02c0) Data frame received for 5\nI0811 12:24:31.018399     782 log.go:172] (0xc000a4a000) (5) Data frame handling\nI0811 12:24:31.018418     782 log.go:172] (0xc000a4a000) (5) Data frame sent\nI0811 12:24:31.018428     782 log.go:172] (0xc0005d02c0) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 12:24:31.018436     782 log.go:172] (0xc000a4a000) (5) Data frame handling\nI0811 12:24:31.019674     782 log.go:172] (0xc0005d02c0) Data frame received for 1\nI0811 12:24:31.019699     782 log.go:172] (0xc00047cb40) (1) Data frame handling\nI0811 12:24:31.019723     782 log.go:172] (0xc00047cb40) (1) Data frame sent\nI0811 12:24:31.019742     782 log.go:172] (0xc0005d02c0) (0xc00047cb40) Stream removed, broadcasting: 1\nI0811 12:24:31.019757     782 log.go:172] (0xc0005d02c0) Go away received\nI0811 12:24:31.020098     782 log.go:172] (0xc0005d02c0) (0xc00047cb40) Stream removed, broadcasting: 1\nI0811 12:24:31.020115     782 log.go:172] (0xc0005d02c0) (0xc0007ae0a0) Stream removed, broadcasting: 3\nI0811 12:24:31.020124     782 log.go:172] (0xc0005d02c0) (0xc000a4a000) Stream removed, broadcasting: 5\n"
Aug 11 12:24:31.026: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 11 12:24:31.026: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 11 12:24:31.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4926 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 11 12:24:31.261: INFO: stderr: "I0811 12:24:31.158479     803 log.go:172] (0xc000a438c0) (0xc000ab4960) Create stream\nI0811 12:24:31.158561     803 log.go:172] (0xc000a438c0) (0xc000ab4960) Stream added, broadcasting: 1\nI0811 12:24:31.163784     803 log.go:172] (0xc000a438c0) Reply frame received for 1\nI0811 12:24:31.163827     803 log.go:172] (0xc000a438c0) (0xc0005db680) Create stream\nI0811 12:24:31.163842     803 log.go:172] (0xc000a438c0) (0xc0005db680) Stream added, broadcasting: 3\nI0811 12:24:31.164859     803 log.go:172] (0xc000a438c0) Reply frame received for 3\nI0811 12:24:31.164910     803 log.go:172] (0xc000a438c0) (0xc000ab4000) Create stream\nI0811 12:24:31.164930     803 log.go:172] (0xc000a438c0) (0xc000ab4000) Stream added, broadcasting: 5\nI0811 12:24:31.166102     803 log.go:172] (0xc000a438c0) Reply frame received for 5\nI0811 12:24:31.225144     803 log.go:172] (0xc000a438c0) Data frame received for 5\nI0811 12:24:31.225165     803 log.go:172] (0xc000ab4000) (5) Data frame handling\nI0811 12:24:31.225177     803 log.go:172] (0xc000ab4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 12:24:31.250842     803 log.go:172] (0xc000a438c0) Data frame received for 3\nI0811 12:24:31.250879     803 log.go:172] (0xc0005db680) (3) Data frame handling\nI0811 12:24:31.250896     803 log.go:172] (0xc0005db680) (3) Data frame sent\nI0811 12:24:31.251811     803 log.go:172] (0xc000a438c0) Data frame received for 3\nI0811 12:24:31.251918     803 log.go:172] (0xc0005db680) (3) Data frame handling\nI0811 12:24:31.252051     803 log.go:172] (0xc000a438c0) Data frame received for 5\nI0811 12:24:31.252114     803 log.go:172] (0xc000ab4000) (5) Data frame handling\nI0811 12:24:31.253668     803 log.go:172] (0xc000a438c0) Data frame received for 1\nI0811 12:24:31.253695     803 log.go:172] (0xc000ab4960) (1) Data frame handling\nI0811 12:24:31.253717     803 log.go:172] (0xc000ab4960) (1) Data frame sent\nI0811 12:24:31.253788     803 log.go:172] (0xc000a438c0) (0xc000ab4960) Stream removed, broadcasting: 1\nI0811 12:24:31.253952     803 log.go:172] (0xc000a438c0) Go away received\nI0811 12:24:31.254217     803 log.go:172] (0xc000a438c0) (0xc000ab4960) Stream removed, broadcasting: 1\nI0811 12:24:31.254249     803 log.go:172] (0xc000a438c0) (0xc0005db680) Stream removed, broadcasting: 3\nI0811 12:24:31.254264     803 log.go:172] (0xc000a438c0) (0xc000ab4000) Stream removed, broadcasting: 5\n"
Aug 11 12:24:31.261: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 11 12:24:31.261: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 11 12:24:31.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4926 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 11 12:24:31.579: INFO: stderr: "I0811 12:24:31.445336     823 log.go:172] (0xc000aa60b0) (0xc0005b5680) Create stream\nI0811 12:24:31.445432     823 log.go:172] (0xc000aa60b0) (0xc0005b5680) Stream added, broadcasting: 1\nI0811 12:24:31.448459     823 log.go:172] (0xc000aa60b0) Reply frame received for 1\nI0811 12:24:31.448512     823 log.go:172] (0xc000aa60b0) (0xc000932000) Create stream\nI0811 12:24:31.448545     823 log.go:172] (0xc000aa60b0) (0xc000932000) Stream added, broadcasting: 3\nI0811 12:24:31.449526     823 log.go:172] (0xc000aa60b0) Reply frame received for 3\nI0811 12:24:31.449557     823 log.go:172] (0xc000aa60b0) (0xc0009320a0) Create stream\nI0811 12:24:31.449573     823 log.go:172] (0xc000aa60b0) (0xc0009320a0) Stream added, broadcasting: 5\nI0811 12:24:31.450383     823 log.go:172] (0xc000aa60b0) Reply frame received for 5\nI0811 12:24:31.529740     823 log.go:172] (0xc000aa60b0) Data frame received for 5\nI0811 12:24:31.529765     823 log.go:172] (0xc0009320a0) (5) Data frame handling\nI0811 12:24:31.529781     823 log.go:172] (0xc0009320a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 12:24:31.569193     823 log.go:172] (0xc000aa60b0) Data frame received for 3\nI0811 12:24:31.569234     823 log.go:172] (0xc000932000) (3) Data frame handling\nI0811 12:24:31.569271     823 log.go:172] (0xc000932000) (3) Data frame sent\nI0811 12:24:31.569293     823 log.go:172] (0xc000aa60b0) Data frame received for 3\nI0811 12:24:31.569310     823 log.go:172] (0xc000932000) (3) Data frame handling\nI0811 12:24:31.569365     823 log.go:172] (0xc000aa60b0) Data frame received for 5\nI0811 12:24:31.569381     823 log.go:172] (0xc0009320a0) (5) Data frame handling\nI0811 12:24:31.571117     823 log.go:172] (0xc000aa60b0) Data frame received for 1\nI0811 12:24:31.571138     823 log.go:172] (0xc0005b5680) (1) Data frame handling\nI0811 12:24:31.571156     823 log.go:172] (0xc0005b5680) (1) Data frame sent\nI0811 12:24:31.571323     823 log.go:172] (0xc000aa60b0) (0xc0005b5680) Stream removed, broadcasting: 1\nI0811 12:24:31.571354     823 log.go:172] (0xc000aa60b0) Go away received\nI0811 12:24:31.571877     823 log.go:172] (0xc000aa60b0) (0xc0005b5680) Stream removed, broadcasting: 1\nI0811 12:24:31.571912     823 log.go:172] (0xc000aa60b0) (0xc000932000) Stream removed, broadcasting: 3\nI0811 12:24:31.571933     823 log.go:172] (0xc000aa60b0) (0xc0009320a0) Stream removed, broadcasting: 5\n"
Aug 11 12:24:31.579: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 11 12:24:31.579: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 11 12:24:31.579: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 12:24:31.582: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 11 12:24:41.671: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 12:24:41.671: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 12:24:41.671: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 12:24:42.218: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998968s
Aug 11 12:24:43.387: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.525665777s
Aug 11 12:24:44.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.357354525s
Aug 11 12:24:45.447: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.325555196s
Aug 11 12:24:46.489: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.296850165s
Aug 11 12:24:47.494: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.255393399s
Aug 11 12:24:48.498: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.250138548s
Aug 11 12:24:49.504: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.245926197s
Aug 11 12:24:50.509: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.240088888s
Aug 11 12:24:51.531: INFO: Verifying statefulset ss doesn't scale past 3 for another 235.369184ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4926
Aug 11 12:24:52.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4926 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 11 12:24:52.850: INFO: stderr: "I0811 12:24:52.767734     842 log.go:172] (0xc000978790) (0xc0009461e0) Create stream\nI0811 12:24:52.767788     842 log.go:172] (0xc000978790) (0xc0009461e0) Stream added, broadcasting: 1\nI0811 12:24:52.770326     842 log.go:172] (0xc000978790) Reply frame received for 1\nI0811 12:24:52.770389     842 log.go:172] (0xc000978790) (0xc00051d860) Create stream\nI0811 12:24:52.770410     842 log.go:172] (0xc000978790) (0xc00051d860) Stream added, broadcasting: 3\nI0811 12:24:52.771438     842 log.go:172] (0xc000978790) Reply frame received for 3\nI0811 12:24:52.771462     842 log.go:172] (0xc000978790) (0xc0006812c0) Create stream\nI0811 12:24:52.771477     842 log.go:172] (0xc000978790) (0xc0006812c0) Stream added, broadcasting: 5\nI0811 12:24:52.772297     842 log.go:172] (0xc000978790) Reply frame received for 5\nI0811 12:24:52.843693     842 log.go:172] (0xc000978790) Data frame received for 5\nI0811 12:24:52.843728     842 log.go:172] (0xc0006812c0) (5) Data frame handling\nI0811 12:24:52.843742     842 log.go:172] (0xc0006812c0) (5) Data frame sent\nI0811 12:24:52.843753     842 log.go:172] (0xc000978790) Data frame received for 5\nI0811 12:24:52.843760     842 log.go:172] (0xc0006812c0) (5) Data frame handling\nI0811 12:24:52.843772     842 log.go:172] (0xc000978790) Data frame received for 3\nI0811 12:24:52.843779     842 log.go:172] (0xc00051d860) (3) Data frame handling\nI0811 12:24:52.843788     842 log.go:172] (0xc00051d860) (3) Data frame sent\nI0811 12:24:52.843796     842 log.go:172] (0xc000978790) Data frame received for 3\nI0811 12:24:52.843803     842 log.go:172] (0xc00051d860) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 12:24:52.844862     842 log.go:172] (0xc000978790) Data frame received for 1\nI0811 12:24:52.844885     842 log.go:172] (0xc0009461e0) (1) Data frame handling\nI0811 12:24:52.844904     842 log.go:172] (0xc0009461e0) (1) Data frame sent\nI0811 12:24:52.844924     842 log.go:172] (0xc000978790) (0xc0009461e0) Stream removed, broadcasting: 1\nI0811 12:24:52.844941     842 log.go:172] (0xc000978790) Go away received\nI0811 12:24:52.845427     842 log.go:172] (0xc000978790) (0xc0009461e0) Stream removed, broadcasting: 1\nI0811 12:24:52.845447     842 log.go:172] (0xc000978790) (0xc00051d860) Stream removed, broadcasting: 3\nI0811 12:24:52.845460     842 log.go:172] (0xc000978790) (0xc0006812c0) Stream removed, broadcasting: 5\n"
Aug 11 12:24:52.850: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 11 12:24:52.850: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 11 12:24:52.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4926 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 11 12:24:53.047: INFO: stderr: "I0811 12:24:52.984094     863 log.go:172] (0xc000af0000) (0xc000226280) Create stream\nI0811 12:24:52.984149     863 log.go:172] (0xc000af0000) (0xc000226280) Stream added, broadcasting: 1\nI0811 12:24:52.986198     863 log.go:172] (0xc000af0000) Reply frame received for 1\nI0811 12:24:52.986240     863 log.go:172] (0xc000af0000) (0xc0008fac80) Create stream\nI0811 12:24:52.986265     863 log.go:172] (0xc000af0000) (0xc0008fac80) Stream added, broadcasting: 3\nI0811 12:24:52.987008     863 log.go:172] (0xc000af0000) Reply frame received for 3\nI0811 12:24:52.987036     863 log.go:172] (0xc000af0000) (0xc0007c8000) Create stream\nI0811 12:24:52.987046     863 log.go:172] (0xc000af0000) (0xc0007c8000) Stream added, broadcasting: 5\nI0811 12:24:52.987904     863 log.go:172] (0xc000af0000) Reply frame received for 5\nI0811 12:24:53.039797     863 log.go:172] (0xc000af0000) Data frame received for 3\nI0811 12:24:53.039823     863 log.go:172] (0xc0008fac80) (3) Data frame handling\nI0811 12:24:53.039833     863 log.go:172] (0xc0008fac80) (3) Data frame sent\nI0811 12:24:53.039841     863 log.go:172] (0xc000af0000) Data frame received for 3\nI0811 12:24:53.039850     863 log.go:172] (0xc0008fac80) (3) Data frame handling\nI0811 12:24:53.039935     863 log.go:172] (0xc000af0000) Data frame received for 5\nI0811 12:24:53.039957     863 log.go:172] (0xc0007c8000) (5) Data frame handling\nI0811 12:24:53.039967     863 log.go:172] (0xc0007c8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 12:24:53.040170     863 log.go:172] (0xc000af0000) Data frame received for 5\nI0811 12:24:53.040229     863 log.go:172] (0xc0007c8000) (5) Data frame handling\nI0811 12:24:53.041160     863 log.go:172] (0xc000af0000) Data frame received for 1\nI0811 12:24:53.041175     863 log.go:172] (0xc000226280) (1) Data frame handling\nI0811 12:24:53.041190     863 log.go:172] (0xc000226280) (1) Data frame sent\nI0811 12:24:53.041207     863 log.go:172] (0xc000af0000) (0xc000226280) Stream removed, broadcasting: 1\nI0811 12:24:53.041278     863 log.go:172] (0xc000af0000) Go away received\nI0811 12:24:53.041461     863 log.go:172] (0xc000af0000) (0xc000226280) Stream removed, broadcasting: 1\nI0811 12:24:53.041474     863 log.go:172] (0xc000af0000) (0xc0008fac80) Stream removed, broadcasting: 3\nI0811 12:24:53.041479     863 log.go:172] (0xc000af0000) (0xc0007c8000) Stream removed, broadcasting: 5\n"
Aug 11 12:24:53.047: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 11 12:24:53.047: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 11 12:24:53.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4926 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 11 12:24:53.241: INFO: stderr: "I0811 12:24:53.166976     882 log.go:172] (0xc00097e9a0) (0xc00086e280) Create stream\nI0811 12:24:53.167058     882 log.go:172] (0xc00097e9a0) (0xc00086e280) Stream added, broadcasting: 1\nI0811 12:24:53.176431     882 log.go:172] (0xc00097e9a0) Reply frame received for 1\nI0811 12:24:53.176486     882 log.go:172] (0xc00097e9a0) (0xc000936000) Create stream\nI0811 12:24:53.176496     882 log.go:172] (0xc00097e9a0) (0xc000936000) Stream added, broadcasting: 3\nI0811 12:24:53.177319     882 log.go:172] (0xc00097e9a0) Reply frame received for 3\nI0811 12:24:53.177356     882 log.go:172] (0xc00097e9a0) (0xc000406b40) Create stream\nI0811 12:24:53.177377     882 log.go:172] (0xc00097e9a0) (0xc000406b40) Stream added, broadcasting: 5\nI0811 12:24:53.178075     882 log.go:172] (0xc00097e9a0) Reply frame received for 5\nI0811 12:24:53.232543     882 log.go:172] (0xc00097e9a0) Data frame received for 3\nI0811 12:24:53.232582     882 log.go:172] (0xc000936000) (3) Data frame handling\nI0811 12:24:53.232593     882 log.go:172] (0xc000936000) (3) Data frame sent\nI0811 12:24:53.232608     882 log.go:172] (0xc00097e9a0) Data frame received for 3\nI0811 12:24:53.232625     882 log.go:172] (0xc000936000) (3) Data frame handling\nI0811 12:24:53.232647     882 log.go:172] (0xc00097e9a0) Data frame received for 5\nI0811 12:24:53.232666     882 log.go:172] (0xc000406b40) (5) Data frame handling\nI0811 12:24:53.232685     882 log.go:172] (0xc000406b40) (5) Data frame sent\nI0811 12:24:53.232700     882 log.go:172] (0xc00097e9a0) Data frame received for 5\nI0811 12:24:53.232708     882 log.go:172] (0xc000406b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 12:24:53.235772     882 log.go:172] (0xc00097e9a0) Data frame received for 1\nI0811 12:24:53.235796     882 log.go:172] (0xc00086e280) (1) Data frame handling\nI0811 12:24:53.235817     882 log.go:172] (0xc00086e280) (1) Data frame sent\nI0811 12:24:53.235836     882 log.go:172] (0xc00097e9a0) (0xc00086e280) Stream removed, broadcasting: 1\nI0811 12:24:53.235854     882 log.go:172] (0xc00097e9a0) Go away received\nI0811 12:24:53.236175     882 log.go:172] (0xc00097e9a0) (0xc00086e280) Stream removed, broadcasting: 1\nI0811 12:24:53.236211     882 log.go:172] (0xc00097e9a0) (0xc000936000) Stream removed, broadcasting: 3\nI0811 12:24:53.236222     882 log.go:172] (0xc00097e9a0) (0xc000406b40) Stream removed, broadcasting: 5\n"
Aug 11 12:24:53.241: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 11 12:24:53.241: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 11 12:24:53.241: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 11 12:25:23.298: INFO: Deleting all statefulset in ns statefulset-4926
Aug 11 12:25:23.301: INFO: Scaling statefulset ss to 0
Aug 11 12:25:23.310: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 12:25:23.313: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:25:23.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4926" for this suite.

• [SLOW TEST:104.042 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":92,"skipped":1414,"failed":0}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:25:23.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5556
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-5556
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5556
Aug 11 12:25:23.506: INFO: Found 0 stateful pods, waiting for 1
Aug 11 12:25:33.510: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 11 12:25:33.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5556 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 11 12:25:33.747: INFO: stderr: "I0811 12:25:33.640459     903 log.go:172] (0xc0002191e0) (0xc0006654a0) Create stream\nI0811 12:25:33.640526     903 log.go:172] (0xc0002191e0) (0xc0006654a0) Stream added, broadcasting: 1\nI0811 12:25:33.643415     903 log.go:172] (0xc0002191e0) Reply frame received for 1\nI0811 12:25:33.643455     903 log.go:172] (0xc0002191e0) (0xc0009d6000) Create stream\nI0811 12:25:33.643465     903 log.go:172] (0xc0002191e0) (0xc0009d6000) Stream added, broadcasting: 3\nI0811 12:25:33.644525     903 log.go:172] (0xc0002191e0) Reply frame received for 3\nI0811 12:25:33.644544     903 log.go:172] (0xc0002191e0) (0xc000665540) Create stream\nI0811 12:25:33.644550     903 log.go:172] (0xc0002191e0) (0xc000665540) Stream added, broadcasting: 5\nI0811 12:25:33.645506     903 log.go:172] (0xc0002191e0) Reply frame received for 5\nI0811 12:25:33.705524     903 log.go:172] (0xc0002191e0) Data frame received for 5\nI0811 12:25:33.705545     903 log.go:172] (0xc000665540) (5) Data frame handling\nI0811 12:25:33.705556     903 log.go:172] (0xc000665540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 12:25:33.739971     903 log.go:172] (0xc0002191e0) Data frame received for 3\nI0811 12:25:33.739994     903 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0811 12:25:33.740006     903 log.go:172] (0xc0009d6000) (3) Data frame sent\nI0811 12:25:33.740013     903 log.go:172] (0xc0002191e0) Data frame received for 3\nI0811 12:25:33.740018     903 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0811 12:25:33.740140     903 log.go:172] (0xc0002191e0) Data frame received for 5\nI0811 12:25:33.740153     903 log.go:172] (0xc000665540) (5) Data frame handling\nI0811 12:25:33.742124     903 log.go:172] (0xc0002191e0) Data frame received for 1\nI0811 12:25:33.742146     903 log.go:172] (0xc0006654a0) (1) Data frame handling\nI0811 12:25:33.742156     903 log.go:172] (0xc0006654a0) (1) Data frame sent\nI0811 12:25:33.742168     903 log.go:172] (0xc0002191e0) (0xc0006654a0) Stream removed, broadcasting: 1\nI0811 12:25:33.742268     903 log.go:172] (0xc0002191e0) Go away received\nI0811 12:25:33.742448     903 log.go:172] (0xc0002191e0) (0xc0006654a0) Stream removed, broadcasting: 1\nI0811 12:25:33.742471     903 log.go:172] (0xc0002191e0) (0xc0009d6000) Stream removed, broadcasting: 3\nI0811 12:25:33.742480     903 log.go:172] (0xc0002191e0) (0xc000665540) Stream removed, broadcasting: 5\n"
Aug 11 12:25:33.748: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 11 12:25:33.748: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 11 12:25:33.752: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 11 12:25:43.757: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 12:25:43.757: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 12:25:43.818: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 11 12:25:43.818: INFO: ss-0  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  }]
Aug 11 12:25:43.818: INFO: 
Aug 11 12:25:43.818: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 11 12:25:44.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.949017696s
Aug 11 12:25:45.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.944069457s
Aug 11 12:25:47.005: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.88345312s
Aug 11 12:25:48.178: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.761501748s
Aug 11 12:25:49.203: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.589088548s
Aug 11 12:25:50.658: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.56420381s
Aug 11 12:25:51.797: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.109010265s
Aug 11 12:25:52.803: INFO: Verifying statefulset ss doesn't scale past 3 for another 969.865202ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5556
Aug 11 12:25:53.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5556 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 11 12:25:54.026: INFO: stderr: "I0811 12:25:53.943142     923 log.go:172] (0xc000756bb0) (0xc000750320) Create stream\nI0811 12:25:53.943226     923 log.go:172] (0xc000756bb0) (0xc000750320) Stream added, broadcasting: 1\nI0811 12:25:53.946293     923 log.go:172] (0xc000756bb0) Reply frame received for 1\nI0811 12:25:53.946346     923 log.go:172] (0xc000756bb0) (0xc0006f8000) Create stream\nI0811 12:25:53.946367     923 log.go:172] (0xc000756bb0) (0xc0006f8000) Stream added, broadcasting: 3\nI0811 12:25:53.947409     923 log.go:172] (0xc000756bb0) Reply frame received for 3\nI0811 12:25:53.947449     923 log.go:172] (0xc000756bb0) (0xc0006f8140) Create stream\nI0811 12:25:53.947468     923 log.go:172] (0xc000756bb0) (0xc0006f8140) Stream added, broadcasting: 5\nI0811 12:25:53.948343     923 log.go:172] (0xc000756bb0) Reply frame received for 5\nI0811 12:25:54.017984     923 log.go:172] (0xc000756bb0) Data frame received for 5\nI0811 12:25:54.018022     923 log.go:172] (0xc0006f8140) (5) Data frame handling\nI0811 12:25:54.018036     923 log.go:172] (0xc0006f8140) (5) Data frame sent\nI0811 12:25:54.018045     923 log.go:172] (0xc000756bb0) Data frame received for 5\nI0811 12:25:54.018054     923 log.go:172] (0xc0006f8140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 12:25:54.018082     923 log.go:172] (0xc000756bb0) Data frame received for 3\nI0811 12:25:54.018103     923 log.go:172] (0xc0006f8000) (3) Data frame handling\nI0811 12:25:54.018119     923 log.go:172] (0xc0006f8000) (3) Data frame sent\nI0811 12:25:54.018128     923 log.go:172] (0xc000756bb0) Data frame received for 3\nI0811 12:25:54.018134     923 log.go:172] (0xc0006f8000) (3) Data frame handling\nI0811 12:25:54.019507     923 log.go:172] (0xc000756bb0) Data frame received for 1\nI0811 12:25:54.019522     923 log.go:172] (0xc000750320) (1) Data frame handling\nI0811 12:25:54.019529     923 log.go:172] (0xc000750320) (1) Data frame sent\nI0811 12:25:54.019543     923 log.go:172] (0xc000756bb0) (0xc000750320) Stream removed, broadcasting: 1\nI0811 12:25:54.019569     923 log.go:172] (0xc000756bb0) Go away received\nI0811 12:25:54.019897     923 log.go:172] (0xc000756bb0) (0xc000750320) Stream removed, broadcasting: 1\nI0811 12:25:54.019915     923 log.go:172] (0xc000756bb0) (0xc0006f8000) Stream removed, broadcasting: 3\nI0811 12:25:54.019923     923 log.go:172] (0xc000756bb0) (0xc0006f8140) Stream removed, broadcasting: 5\n"
Aug 11 12:25:54.026: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 11 12:25:54.026: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 11 12:25:54.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5556 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 11 12:25:54.233: INFO: stderr: "I0811 12:25:54.152917     943 log.go:172] (0xc00003af20) (0xc0008ba460) Create stream\nI0811 12:25:54.152993     943 log.go:172] (0xc00003af20) (0xc0008ba460) Stream added, broadcasting: 1\nI0811 12:25:54.157431     943 log.go:172] (0xc00003af20) Reply frame received for 1\nI0811 12:25:54.157478     943 log.go:172] (0xc00003af20) (0xc000681720) Create stream\nI0811 12:25:54.157497     943 log.go:172] (0xc00003af20) (0xc000681720) Stream added, broadcasting: 3\nI0811 12:25:54.158404     943 log.go:172] (0xc00003af20) Reply frame received for 3\nI0811 12:25:54.158445     943 log.go:172] (0xc00003af20) (0xc00052eb40) Create stream\nI0811 12:25:54.158463     943 log.go:172] (0xc00003af20) (0xc00052eb40) Stream added, broadcasting: 5\nI0811 12:25:54.159279     943 log.go:172] (0xc00003af20) Reply frame received for 5\nI0811 12:25:54.224138     943 log.go:172] (0xc00003af20) Data frame received for 5\nI0811 12:25:54.224176     943 log.go:172] (0xc00052eb40) (5) Data frame handling\nI0811 12:25:54.224190     943 log.go:172] (0xc00052eb40) (5) Data frame sent\nI0811 12:25:54.224204     943 log.go:172] (0xc00003af20) Data frame received for 5\nI0811 12:25:54.224219     943 log.go:172] (0xc00052eb40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0811 12:25:54.224244     943 log.go:172] (0xc00003af20) Data frame received for 3\nI0811 12:25:54.224270     943 log.go:172] (0xc000681720) (3) Data frame handling\nI0811 12:25:54.224280     943 log.go:172] (0xc000681720) (3) Data frame sent\nI0811 12:25:54.224296     943 log.go:172] (0xc00003af20) Data frame received for 3\nI0811 12:25:54.224311     943 log.go:172] (0xc000681720) (3) Data frame handling\nI0811 12:25:54.225619     943 log.go:172] (0xc00003af20) Data frame received for 1\nI0811 12:25:54.225650     943 log.go:172] (0xc0008ba460) (1) Data frame handling\nI0811 12:25:54.225672     943 log.go:172] (0xc0008ba460) (1) Data frame sent\nI0811 12:25:54.225691     943 log.go:172] (0xc00003af20) (0xc0008ba460) Stream removed, broadcasting: 1\nI0811 12:25:54.225709     943 log.go:172] (0xc00003af20) Go away received\nI0811 12:25:54.226115     943 log.go:172] (0xc00003af20) (0xc0008ba460) Stream removed, broadcasting: 1\nI0811 12:25:54.226135     943 log.go:172] (0xc00003af20) (0xc000681720) Stream removed, broadcasting: 3\nI0811 12:25:54.226147     943 log.go:172] (0xc00003af20) (0xc00052eb40) Stream removed, broadcasting: 5\n"
Aug 11 12:25:54.233: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 11 12:25:54.233: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 11 12:25:54.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5556 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 11 12:25:54.438: INFO: stderr: "I0811 12:25:54.348331     967 log.go:172] (0xc0009d6000) (0xc000290140) Create stream\nI0811 12:25:54.348401     967 log.go:172] (0xc0009d6000) (0xc000290140) Stream added, broadcasting: 1\nI0811 12:25:54.351563     967 log.go:172] (0xc0009d6000) Reply frame received for 1\nI0811 12:25:54.351629     967 log.go:172] (0xc0009d6000) (0xc0007da000) Create stream\nI0811 12:25:54.351756     967 log.go:172] (0xc0009d6000) (0xc0007da000) Stream added, broadcasting: 3\nI0811 12:25:54.353622     967 log.go:172] (0xc0009d6000) Reply frame received for 3\nI0811 12:25:54.353657     967 log.go:172] (0xc0009d6000) (0xc0007da0a0) Create stream\nI0811 12:25:54.353674     967 log.go:172] (0xc0009d6000) (0xc0007da0a0) Stream added, broadcasting: 5\nI0811 12:25:54.354414     967 log.go:172] (0xc0009d6000) Reply frame received for 5\nI0811 12:25:54.430133     967 log.go:172] (0xc0009d6000) Data frame received for 5\nI0811 12:25:54.430159     967 log.go:172] (0xc0007da0a0) (5) Data frame handling\nI0811 12:25:54.430166     967 log.go:172] (0xc0007da0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0811 12:25:54.430178     967 log.go:172] (0xc0009d6000) Data frame received for 3\nI0811 12:25:54.430182     967 log.go:172] (0xc0007da000) (3) Data frame handling\nI0811 12:25:54.430191     967 log.go:172] (0xc0007da000) (3) Data frame sent\nI0811 12:25:54.430202     967 log.go:172] (0xc0009d6000) Data frame received for 3\nI0811 12:25:54.430209     967 log.go:172] (0xc0007da000) (3) Data frame handling\nI0811 12:25:54.430405     967 log.go:172] (0xc0009d6000) Data frame received for 5\nI0811 12:25:54.430439     967 log.go:172] (0xc0007da0a0) (5) Data frame handling\nI0811 12:25:54.432091     967 log.go:172] (0xc0009d6000) Data frame received for 1\nI0811 12:25:54.432117     967 log.go:172] (0xc000290140) (1) Data frame handling\nI0811 12:25:54.432135     967 log.go:172] (0xc000290140) (1) Data frame sent\nI0811 12:25:54.432153     967 log.go:172] (0xc0009d6000) (0xc000290140) Stream removed, broadcasting: 1\nI0811 12:25:54.432171     967 log.go:172] (0xc0009d6000) Go away received\nI0811 12:25:54.432442     967 log.go:172] (0xc0009d6000) (0xc000290140) Stream removed, broadcasting: 1\nI0811 12:25:54.432454     967 log.go:172] (0xc0009d6000) (0xc0007da000) Stream removed, broadcasting: 3\nI0811 12:25:54.432458     967 log.go:172] (0xc0009d6000) (0xc0007da0a0) Stream removed, broadcasting: 5\n"
Aug 11 12:25:54.439: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 11 12:25:54.439: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 11 12:25:54.443: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Aug 11 12:26:04.447: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 12:26:04.447: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 12:26:04.447: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 11 12:26:04.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5556 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 11 12:26:04.674: INFO: stderr: "I0811 12:26:04.588147     986 log.go:172] (0xc0003cbd90) (0xc000a74780) Create stream\nI0811 12:26:04.588215     986 log.go:172] (0xc0003cbd90) (0xc000a74780) Stream added, broadcasting: 1\nI0811 12:26:04.591349     986 log.go:172] (0xc0003cbd90) Reply frame received for 1\nI0811 12:26:04.591389     986 log.go:172] (0xc0003cbd90) (0xc000a74820) Create stream\nI0811 12:26:04.591401     986 log.go:172] (0xc0003cbd90) (0xc000a74820) Stream added, broadcasting: 3\nI0811 12:26:04.592407     986 log.go:172] (0xc0003cbd90) Reply frame received for 3\nI0811 12:26:04.592447     986 log.go:172] (0xc0003cbd90) (0xc000a2c000) Create stream\nI0811 12:26:04.592460     986 log.go:172] (0xc0003cbd90) (0xc000a2c000) Stream added, broadcasting: 5\nI0811 12:26:04.593469     986 log.go:172] (0xc0003cbd90) Reply frame received for 5\nI0811 12:26:04.665209     986 log.go:172] (0xc0003cbd90) Data frame received for 5\nI0811 12:26:04.665259     986 log.go:172] (0xc000a2c000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 12:26:04.665297     986 log.go:172] (0xc0003cbd90) Data frame received for 3\nI0811 12:26:04.665359     986 log.go:172] (0xc000a74820) (3) Data frame handling\nI0811 12:26:04.665386     986 log.go:172] (0xc000a74820) (3) Data frame sent\nI0811 12:26:04.665400     986 log.go:172] (0xc0003cbd90) Data frame received for 3\nI0811 12:26:04.665423     986 log.go:172] (0xc000a2c000) (5) Data frame sent\nI0811 12:26:04.665447     986 log.go:172] (0xc0003cbd90) Data frame received for 5\nI0811 12:26:04.665454     986 log.go:172] (0xc000a2c000) (5) Data frame handling\nI0811 12:26:04.665469     986 log.go:172] (0xc000a74820) (3) Data frame handling\nI0811 12:26:04.666482     986 log.go:172] (0xc0003cbd90) Data frame received for 1\nI0811 12:26:04.666505     986 log.go:172] (0xc000a74780) (1) Data frame handling\nI0811 12:26:04.666518     986 log.go:172] (0xc000a74780) (1) Data frame sent\nI0811 12:26:04.666534  
   986 log.go:172] (0xc0003cbd90) (0xc000a74780) Stream removed, broadcasting: 1\nI0811 12:26:04.666551     986 log.go:172] (0xc0003cbd90) Go away received\nI0811 12:26:04.666927     986 log.go:172] (0xc0003cbd90) (0xc000a74780) Stream removed, broadcasting: 1\nI0811 12:26:04.666946     986 log.go:172] (0xc0003cbd90) (0xc000a74820) Stream removed, broadcasting: 3\nI0811 12:26:04.666956     986 log.go:172] (0xc0003cbd90) (0xc000a2c000) Stream removed, broadcasting: 5\n"
Aug 11 12:26:04.674: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 11 12:26:04.674: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 11 12:26:04.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5556 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 11 12:26:04.915: INFO: stderr: "I0811 12:26:04.808714    1010 log.go:172] (0xc0009d2bb0) (0xc0005375e0) Create stream\nI0811 12:26:04.808904    1010 log.go:172] (0xc0009d2bb0) (0xc0005375e0) Stream added, broadcasting: 1\nI0811 12:26:04.812034    1010 log.go:172] (0xc0009d2bb0) Reply frame received for 1\nI0811 12:26:04.812094    1010 log.go:172] (0xc0009d2bb0) (0xc00079e000) Create stream\nI0811 12:26:04.812123    1010 log.go:172] (0xc0009d2bb0) (0xc00079e000) Stream added, broadcasting: 3\nI0811 12:26:04.813319    1010 log.go:172] (0xc0009d2bb0) Reply frame received for 3\nI0811 12:26:04.813353    1010 log.go:172] (0xc0009d2bb0) (0xc000537680) Create stream\nI0811 12:26:04.813359    1010 log.go:172] (0xc0009d2bb0) (0xc000537680) Stream added, broadcasting: 5\nI0811 12:26:04.814366    1010 log.go:172] (0xc0009d2bb0) Reply frame received for 5\nI0811 12:26:04.870867    1010 log.go:172] (0xc0009d2bb0) Data frame received for 5\nI0811 12:26:04.870886    1010 log.go:172] (0xc000537680) (5) Data frame handling\nI0811 12:26:04.870897    1010 log.go:172] (0xc000537680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 12:26:04.906799    1010 log.go:172] (0xc0009d2bb0) Data frame received for 3\nI0811 12:26:04.906838    1010 log.go:172] (0xc00079e000) (3) Data frame handling\nI0811 12:26:04.906868    1010 log.go:172] (0xc00079e000) (3) Data frame sent\nI0811 12:26:04.907194    1010 log.go:172] (0xc0009d2bb0) Data frame received for 5\nI0811 12:26:04.907235    1010 log.go:172] (0xc0009d2bb0) Data frame received for 3\nI0811 12:26:04.907280    1010 log.go:172] (0xc00079e000) (3) Data frame handling\nI0811 12:26:04.907313    1010 log.go:172] (0xc000537680) (5) Data frame handling\nI0811 12:26:04.909221    1010 log.go:172] (0xc0009d2bb0) Data frame received for 1\nI0811 12:26:04.909258    1010 log.go:172] (0xc0005375e0) (1) Data frame handling\nI0811 12:26:04.909290    1010 log.go:172] (0xc0005375e0) (1) Data frame sent\nI0811 12:26:04.909316  
  1010 log.go:172] (0xc0009d2bb0) (0xc0005375e0) Stream removed, broadcasting: 1\nI0811 12:26:04.909465    1010 log.go:172] (0xc0009d2bb0) Go away received\nI0811 12:26:04.909793    1010 log.go:172] (0xc0009d2bb0) (0xc0005375e0) Stream removed, broadcasting: 1\nI0811 12:26:04.909818    1010 log.go:172] (0xc0009d2bb0) (0xc00079e000) Stream removed, broadcasting: 3\nI0811 12:26:04.909834    1010 log.go:172] (0xc0009d2bb0) (0xc000537680) Stream removed, broadcasting: 5\n"
Aug 11 12:26:04.915: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 11 12:26:04.915: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 11 12:26:04.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5556 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 11 12:26:05.145: INFO: stderr: "I0811 12:26:05.048390    1031 log.go:172] (0xc000a6c0b0) (0xc000a2e1e0) Create stream\nI0811 12:26:05.048462    1031 log.go:172] (0xc000a6c0b0) (0xc000a2e1e0) Stream added, broadcasting: 1\nI0811 12:26:05.051864    1031 log.go:172] (0xc000a6c0b0) Reply frame received for 1\nI0811 12:26:05.051905    1031 log.go:172] (0xc000a6c0b0) (0xc0005d75e0) Create stream\nI0811 12:26:05.051918    1031 log.go:172] (0xc000a6c0b0) (0xc0005d75e0) Stream added, broadcasting: 3\nI0811 12:26:05.052599    1031 log.go:172] (0xc000a6c0b0) Reply frame received for 3\nI0811 12:26:05.052629    1031 log.go:172] (0xc000a6c0b0) (0xc00034aa00) Create stream\nI0811 12:26:05.052639    1031 log.go:172] (0xc000a6c0b0) (0xc00034aa00) Stream added, broadcasting: 5\nI0811 12:26:05.053494    1031 log.go:172] (0xc000a6c0b0) Reply frame received for 5\nI0811 12:26:05.109267    1031 log.go:172] (0xc000a6c0b0) Data frame received for 5\nI0811 12:26:05.109311    1031 log.go:172] (0xc00034aa00) (5) Data frame handling\nI0811 12:26:05.109335    1031 log.go:172] (0xc00034aa00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 12:26:05.135275    1031 log.go:172] (0xc000a6c0b0) Data frame received for 3\nI0811 12:26:05.135300    1031 log.go:172] (0xc0005d75e0) (3) Data frame handling\nI0811 12:26:05.135310    1031 log.go:172] (0xc0005d75e0) (3) Data frame sent\nI0811 12:26:05.135317    1031 log.go:172] (0xc000a6c0b0) Data frame received for 3\nI0811 12:26:05.135324    1031 log.go:172] (0xc0005d75e0) (3) Data frame handling\nI0811 12:26:05.135970    1031 log.go:172] (0xc000a6c0b0) Data frame received for 5\nI0811 12:26:05.135988    1031 log.go:172] (0xc00034aa00) (5) Data frame handling\nI0811 12:26:05.137763    1031 log.go:172] (0xc000a6c0b0) Data frame received for 1\nI0811 12:26:05.137792    1031 log.go:172] (0xc000a2e1e0) (1) Data frame handling\nI0811 12:26:05.137826    1031 log.go:172] (0xc000a2e1e0) (1) Data frame sent\nI0811 12:26:05.137841  
  1031 log.go:172] (0xc000a6c0b0) (0xc000a2e1e0) Stream removed, broadcasting: 1\nI0811 12:26:05.137966    1031 log.go:172] (0xc000a6c0b0) Go away received\nI0811 12:26:05.138153    1031 log.go:172] (0xc000a6c0b0) (0xc000a2e1e0) Stream removed, broadcasting: 1\nI0811 12:26:05.138178    1031 log.go:172] (0xc000a6c0b0) (0xc0005d75e0) Stream removed, broadcasting: 3\nI0811 12:26:05.138188    1031 log.go:172] (0xc000a6c0b0) (0xc00034aa00) Stream removed, broadcasting: 5\n"
Aug 11 12:26:05.145: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 11 12:26:05.145: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
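The three `kubectl exec` invocations above all follow the same pattern: the file served by each pod's readiness probe is moved out of the Apache web root, which makes the pod report Ready=false without killing it. A minimal sketch of that pattern, using the pod names and namespace from this log (shown as a dry run that only prints the commands; feed them to a shell only against a live cluster where this namespace exists):

```shell
# Dry-run sketch: emit the exec command that breaks each pod's readiness
# probe by moving index.html out of the web root. "|| true" keeps the
# command from failing if the file was already moved.
for pod in ss-0 ss-1 ss-2; do
  echo "kubectl exec --namespace=statefulset-5556 $pod --" \
       "/bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'"
done
```

Moving the file back (as the log shows earlier with `mv -v /tmp/index.html /usr/local/apache2/htdocs/`) restores readiness the same way.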

Aug 11 12:26:05.145: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 12:26:05.148: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Aug 11 12:26:15.154: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 12:26:15.154: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 12:26:15.154: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 12:26:15.172: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 11 12:26:15.172: INFO: ss-0  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  }]
Aug 11 12:26:15.172: INFO: ss-1  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:15.172: INFO: ss-2  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:15.172: INFO: 
Aug 11 12:26:15.172: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 12:26:16.177: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 11 12:26:16.178: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  }]
Aug 11 12:26:16.178: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:16.178: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:16.178: INFO: 
Aug 11 12:26:16.178: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 12:26:17.237: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 11 12:26:17.237: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  }]
Aug 11 12:26:17.237: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:17.237: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:17.237: INFO: 
Aug 11 12:26:17.237: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 12:26:18.247: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 11 12:26:18.247: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  }]
Aug 11 12:26:18.247: INFO: ss-1  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:18.247: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:18.247: INFO: 
Aug 11 12:26:18.247: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 12:26:19.251: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 11 12:26:19.251: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  }]
Aug 11 12:26:19.251: INFO: ss-1  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:19.251: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:19.251: INFO: 
Aug 11 12:26:19.251: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 12:26:20.255: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 11 12:26:20.255: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  }]
Aug 11 12:26:20.255: INFO: ss-1  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:20.255: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:20.255: INFO: 
Aug 11 12:26:20.255: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 12:26:21.260: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 11 12:26:21.260: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  }]
Aug 11 12:26:21.260: INFO: ss-1  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:21.260: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:21.260: INFO: 
Aug 11 12:26:21.260: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 12:26:22.265: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 11 12:26:22.265: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  }]
Aug 11 12:26:22.265: INFO: ss-1  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:22.265: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:22.265: INFO: 
Aug 11 12:26:22.265: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 12:26:23.269: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 11 12:26:23.269: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:23 +0000 UTC  }]
Aug 11 12:26:23.270: INFO: ss-1  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:23.270: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:26:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 12:25:43 +0000 UTC  }]
Aug 11 12:26:23.270: INFO: 
Aug 11 12:26:23.270: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 12:26:24.274: INFO: Verifying statefulset ss doesn't scale past 0 for another 892.106955ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-5556
Aug 11 12:26:25.278: INFO: Scaling statefulset ss to 0
Aug 11 12:26:25.286: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 11 12:26:25.288: INFO: Deleting all statefulset in ns statefulset-5556
Aug 11 12:26:25.290: INFO: Scaling statefulset ss to 0
Aug 11 12:26:25.297: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 12:26:25.299: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:26:25.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5556" for this suite.

• [SLOW TEST:61.987 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":93,"skipped":1414,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:26:25.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:26:25.397: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:26:26.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2807" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":94,"skipped":1425,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:26:26.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1404.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1404.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

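The probe one-liners above loop 600 times, checking each DNS name over both UDP (`+notcp`) and TCP (`+tcp`) with `dig`, and writing an `OK` marker file per successful lookup; the doubled `$$` is escaping added by the e2e template generator, not shell syntax. The least obvious piece is the `awk` expression that derives the pod's A-record name from its IP. It is directly runnable on its own (the example IP below is illustrative; the real script uses `hostname -i`):

```shell
# The awk expression from the probe script turns a pod IP into its
# pod.cluster.local A-record name by replacing the dots with dashes.
ip="10.244.1.7"  # illustrative; the probe pod uses `hostname -i`
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1404.pod.cluster.local"}')
echo "$podARec"   # → 10-244-1-7.dns-1404.pod.cluster.local
```

That dash-separated form is the standard name under which kube-dns/CoreDNS publishes a pod's A record.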
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 11 12:26:34.274: INFO: DNS probes using dns-1404/dns-test-8e0a5b92-d65b-44ba-8096-5d5269c36006 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:26:34.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1404" for this suite.

• [SLOW TEST:8.340 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":95,"skipped":1473,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:26:34.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 11 12:26:35.148: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 11 12:26:35.172: INFO: Waiting for terminating namespaces to be deleted...
Aug 11 12:26:35.174: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 11 12:26:35.193: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 11 12:26:35.193: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 11 12:26:35.193: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug 11 12:26:35.193: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 11 12:26:35.193: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug 11 12:26:35.193: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 12:26:35.193: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug 11 12:26:35.193: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 11 12:26:35.193: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 11 12:26:35.211: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 11 12:26:35.211: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 11 12:26:35.211: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 11 12:26:35.211: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 12:26:35.212: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 11 12:26:35.212: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 11 12:26:35.212: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug 11 12:26:35.212: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Aug 11 12:26:35.974: INFO: Pod rally-19e4df10-30wkw9yu-glqpf requesting resource cpu=0m on Node kali-worker
Aug 11 12:26:35.974: INFO: Pod rally-19e4df10-30wkw9yu-qbmr7 requesting resource cpu=0m on Node kali-worker2
Aug 11 12:26:35.974: INFO: Pod rally-824618b1-6cukkjuh-lb7rq requesting resource cpu=0m on Node kali-worker
Aug 11 12:26:35.974: INFO: Pod rally-824618b1-6cukkjuh-m84l4 requesting resource cpu=0m on Node kali-worker2
Aug 11 12:26:35.974: INFO: Pod kindnet-njbgt requesting resource cpu=100m on Node kali-worker
Aug 11 12:26:35.974: INFO: Pod kindnet-pk4xb requesting resource cpu=100m on Node kali-worker2
Aug 11 12:26:35.974: INFO: Pod kube-proxy-qwsfx requesting resource cpu=0m on Node kali-worker
Aug 11 12:26:35.974: INFO: Pod kube-proxy-vk6jr requesting resource cpu=0m on Node kali-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Aug 11 12:26:35.974: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
Aug 11 12:26:36.016: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-440ea78f-a3dc-423c-bee0-68f3e06c58fd.162a365e529d2f15], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3103/filler-pod-440ea78f-a3dc-423c-bee0-68f3e06c58fd to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-440ea78f-a3dc-423c-bee0-68f3e06c58fd.162a365f17cd30f1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-440ea78f-a3dc-423c-bee0-68f3e06c58fd.162a365fce683a20], Reason = [Created], Message = [Created container filler-pod-440ea78f-a3dc-423c-bee0-68f3e06c58fd]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-440ea78f-a3dc-423c-bee0-68f3e06c58fd.162a365fe113d418], Reason = [Started], Message = [Started container filler-pod-440ea78f-a3dc-423c-bee0-68f3e06c58fd]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b6dfe996-f22f-4b8a-93fb-ff03beffbc10.162a365e3c72bd88], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3103/filler-pod-b6dfe996-f22f-4b8a-93fb-ff03beffbc10 to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b6dfe996-f22f-4b8a-93fb-ff03beffbc10.162a365eea48449b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b6dfe996-f22f-4b8a-93fb-ff03beffbc10.162a365f84f2980d], Reason = [Created], Message = [Created container filler-pod-b6dfe996-f22f-4b8a-93fb-ff03beffbc10]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b6dfe996-f22f-4b8a-93fb-ff03beffbc10.162a365fbf4c176f], Reason = [Started], Message = [Started container filler-pod-b6dfe996-f22f-4b8a-93fb-ff03beffbc10]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162a366027ef97ae], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162a36602dda6a27], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
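The FailedScheduling events above are the expected outcome: each filler pod requests roughly (node allocatable − CPU already requested) on its node, so the `additional-pod` cannot fit anywhere. A sketch of that arithmetic; the allocatable figure is an assumption inferred from this log (11130m filler plus the 100m kindnet-cni request), not something the log states directly:

```shell
#!/bin/sh
# Sketch of the predicate test's fill calculation per node.
allocatable=11230   # hypothetical node allocatable CPU in millicores (inferred)
requested=100       # kindnet-cni request observed on each worker in the log
filler=$((allocatable - requested))
echo "filler pod requests ${filler}m"   # 11130m, matching the filler pods above
```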
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:26:45.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3103" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:10.923 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":96,"skipped":1502,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:26:45.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:26:45.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-099433df-8e39-48d6-b0b4-52db304ffca7" in namespace "projected-9841" to be "Succeeded or Failed"
Aug 11 12:26:45.424: INFO: Pod "downwardapi-volume-099433df-8e39-48d6-b0b4-52db304ffca7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.826087ms
Aug 11 12:26:47.428: INFO: Pod "downwardapi-volume-099433df-8e39-48d6-b0b4-52db304ffca7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020388368s
Aug 11 12:26:49.433: INFO: Pod "downwardapi-volume-099433df-8e39-48d6-b0b4-52db304ffca7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024855651s
STEP: Saw pod success
Aug 11 12:26:49.433: INFO: Pod "downwardapi-volume-099433df-8e39-48d6-b0b4-52db304ffca7" satisfied condition "Succeeded or Failed"
Aug 11 12:26:49.436: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-099433df-8e39-48d6-b0b4-52db304ffca7 container client-container: 
STEP: delete the pod
Aug 11 12:26:49.712: INFO: Waiting for pod downwardapi-volume-099433df-8e39-48d6-b0b4-52db304ffca7 to disappear
Aug 11 12:26:49.750: INFO: Pod downwardapi-volume-099433df-8e39-48d6-b0b4-52db304ffca7 no longer exists
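The log elides the pod spec under test. A projected downward API volume with a per-item file mode has this general shape; the paths and mode value below are illustrative assumptions, not taken from the log:

```yaml
# Illustrative sketch only; item path and mode are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400          # the per-item file mode the test verifies
```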
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:26:49.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9841" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1507,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:26:49.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 11 12:26:49.938: INFO: PodSpec: initContainers in spec.initContainers
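The pod created here has init containers that must each run to completion, in order, before the app container starts (restartPolicy Always keeps the pod running afterwards). The log does not include the spec; a sketch of its general shape, with illustrative names and images:

```yaml
# Illustrative sketch only; container names and images are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:        # run sequentially to completion before app containers
  - name: init1
    image: busybox
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2
```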
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:27:02.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4481" for this suite.

• [SLOW TEST:12.718 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":98,"skipped":1508,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:27:02.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:27:07.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1880" for this suite.

• [SLOW TEST:5.529 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":99,"skipped":1513,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:27:08.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-2f481bc4-b898-47e0-b4a5-fd563b4af9e3
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-2f481bc4-b898-47e0-b4a5-fd563b4af9e3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:28:27.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4317" for this suite.

• [SLOW TEST:79.919 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1528,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:28:27.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:28:27.976: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:28:29.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1238" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":101,"skipped":1537,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:28:29.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 11 12:28:35.865: INFO: Successfully updated pod "labelsupdateb109f094-c22d-4ef6-bb68-c7dc8db3d7dc"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:28:37.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4175" for this suite.

• [SLOW TEST:8.725 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1560,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:28:37.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:28:37.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9339" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":103,"skipped":1571,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:28:37.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Aug 11 12:28:38.045: INFO: Waiting up to 5m0s for pod "var-expansion-f6a8c827-5179-4c9b-bb24-e54742bd6cca" in namespace "var-expansion-5315" to be "Succeeded or Failed"
Aug 11 12:28:38.050: INFO: Pod "var-expansion-f6a8c827-5179-4c9b-bb24-e54742bd6cca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314647ms
Aug 11 12:28:40.054: INFO: Pod "var-expansion-f6a8c827-5179-4c9b-bb24-e54742bd6cca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008560665s
Aug 11 12:28:42.058: INFO: Pod "var-expansion-f6a8c827-5179-4c9b-bb24-e54742bd6cca": Phase="Running", Reason="", readiness=true. Elapsed: 4.012691649s
Aug 11 12:28:44.062: INFO: Pod "var-expansion-f6a8c827-5179-4c9b-bb24-e54742bd6cca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016830718s
STEP: Saw pod success
Aug 11 12:28:44.062: INFO: Pod "var-expansion-f6a8c827-5179-4c9b-bb24-e54742bd6cca" satisfied condition "Succeeded or Failed"
Aug 11 12:28:44.065: INFO: Trying to get logs from node kali-worker pod var-expansion-f6a8c827-5179-4c9b-bb24-e54742bd6cca container dapi-container: 
STEP: delete the pod
Aug 11 12:28:44.122: INFO: Waiting for pod var-expansion-f6a8c827-5179-4c9b-bb24-e54742bd6cca to disappear
Aug 11 12:28:44.135: INFO: Pod var-expansion-f6a8c827-5179-4c9b-bb24-e54742bd6cca no longer exists
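The substitution exercised here uses the `$(VAR)` syntax, which the kubelet expands from the container's declared environment before exec. The actual spec is not in the log; a hedged sketch of the pattern, with illustrative names and values:

```yaml
# Illustrative sketch only; env name, value, and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(TEST_VAR)"]   # $(TEST_VAR) expands before the container runs
    env:
    - name: TEST_VAR
      value: "test-value"
```

Note the distinction from shell expansion: `$(TEST_VAR)` is resolved by Kubernetes from `env`, not by `sh`.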
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:28:44.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5315" for this suite.

• [SLOW TEST:6.173 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1575,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:28:44.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-00220b5d-bae8-450c-a80d-3e61b93fde81
STEP: Creating a pod to test consume secrets
Aug 11 12:28:44.318: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-06150298-0d1c-40dc-aef4-8276787450bf" in namespace "projected-6225" to be "Succeeded or Failed"
Aug 11 12:28:44.333: INFO: Pod "pod-projected-secrets-06150298-0d1c-40dc-aef4-8276787450bf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.742367ms
Aug 11 12:28:46.337: INFO: Pod "pod-projected-secrets-06150298-0d1c-40dc-aef4-8276787450bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019234706s
Aug 11 12:28:48.342: INFO: Pod "pod-projected-secrets-06150298-0d1c-40dc-aef4-8276787450bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0239258s
STEP: Saw pod success
Aug 11 12:28:48.342: INFO: Pod "pod-projected-secrets-06150298-0d1c-40dc-aef4-8276787450bf" satisfied condition "Succeeded or Failed"
Aug 11 12:28:48.345: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-06150298-0d1c-40dc-aef4-8276787450bf container projected-secret-volume-test: 
STEP: delete the pod
Aug 11 12:28:48.394: INFO: Waiting for pod pod-projected-secrets-06150298-0d1c-40dc-aef4-8276787450bf to disappear
Aug 11 12:28:48.398: INFO: Pod pod-projected-secrets-06150298-0d1c-40dc-aef4-8276787450bf no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:28:48.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6225" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1595,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:28:48.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 11 12:28:48.447: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:28:54.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9721" for this suite.

• [SLOW TEST:6.086 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":106,"skipped":1614,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:28:54.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug 11 12:28:54.543: INFO: namespace kubectl-7705
Aug 11 12:28:54.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7705'
Aug 11 12:28:58.044: INFO: stderr: ""
Aug 11 12:28:58.044: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 11 12:28:59.049: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 11 12:28:59.049: INFO: Found 0 / 1
Aug 11 12:29:00.082: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 11 12:29:00.082: INFO: Found 0 / 1
Aug 11 12:29:01.080: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 11 12:29:01.080: INFO: Found 0 / 1
Aug 11 12:29:02.058: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 11 12:29:02.058: INFO: Found 1 / 1
Aug 11 12:29:02.058: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 11 12:29:02.061: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 11 12:29:02.061: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 11 12:29:02.061: INFO: wait on agnhost-master startup in kubectl-7705 
Aug 11 12:29:02.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs agnhost-master-bggbg agnhost-master --namespace=kubectl-7705'
Aug 11 12:29:02.175: INFO: stderr: ""
Aug 11 12:29:02.175: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 11 12:29:02.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7705'
Aug 11 12:29:02.328: INFO: stderr: ""
Aug 11 12:29:02.328: INFO: stdout: "service/rm2 exposed\n"
Aug 11 12:29:02.344: INFO: Service rm2 in namespace kubectl-7705 found.
STEP: exposing service
Aug 11 12:29:04.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7705'
Aug 11 12:29:04.540: INFO: stderr: ""
Aug 11 12:29:04.540: INFO: stdout: "service/rm3 exposed\n"
Aug 11 12:29:04.549: INFO: Service rm3 in namespace kubectl-7705 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:29:06.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7705" for this suite.

• [SLOW TEST:12.073 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":107,"skipped":1622,"failed":0}
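For context on what the test above exercised: the `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` invocation generates a Service roughly like the sketch below. This is a hedged reconstruction, not the exact object the suite created; the selector is inferred from the RC's pod template, assumed here to be `app: agnhost` based on the selector lines in the log.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-7705
spec:
  selector:
    app: agnhost        # copied from the RC's pod selector (assumed)
  ports:
  - port: 1234          # the Service's own port (--port)
    targetPort: 6379    # container port traffic is forwarded to (--target-port)
```

The second `expose service rm2 --name=rm3 --port=2345` step works the same way, except the selector is copied from the existing Service rather than from an RC.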
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:29:06.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:29:06.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 11 12:29:08.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5717 create -f -'
Aug 11 12:29:15.340: INFO: stderr: ""
Aug 11 12:29:15.340: INFO: stdout: "e2e-test-crd-publish-openapi-2232-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 11 12:29:15.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5717 delete e2e-test-crd-publish-openapi-2232-crds test-cr'
Aug 11 12:29:15.464: INFO: stderr: ""
Aug 11 12:29:15.464: INFO: stdout: "e2e-test-crd-publish-openapi-2232-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 11 12:29:15.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5717 apply -f -'
Aug 11 12:29:15.749: INFO: stderr: ""
Aug 11 12:29:15.749: INFO: stdout: "e2e-test-crd-publish-openapi-2232-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 11 12:29:15.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5717 delete e2e-test-crd-publish-openapi-2232-crds test-cr'
Aug 11 12:29:15.859: INFO: stderr: ""
Aug 11 12:29:15.859: INFO: stdout: "e2e-test-crd-publish-openapi-2232-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 11 12:29:15.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2232-crds'
Aug 11 12:29:16.125: INFO: stderr: ""
Aug 11 12:29:16.125: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2232-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:29:18.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5717" for this suite.

• [SLOW TEST:11.496 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":108,"skipped":1631,"failed":0}
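The "preserving unknown fields at the schema root" behavior this test validates comes from `x-kubernetes-preserve-unknown-fields` in the CRD's OpenAPI schema. A minimal sketch of such a CRD follows; the group and names are hypothetical placeholders, not the generated `e2e-test-crd-publish-openapi-2232-crd` names from the run.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com            # hypothetical, for illustration only
spec:
  group: example.com
  names:
    plural: foos
    singular: foo
    kind: Foo
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # Pruning is disabled at the schema root, so kubectl's
        # client-side validation accepts requests with any unknown
        # properties -- the behavior the create/apply steps checked.
        x-kubernetes-preserve-unknown-fields: true
```

With this schema published, `kubectl explain foos` returns only the KIND/VERSION header and an empty description, matching the `kubectl explain` output captured in the log.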
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:29:18.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:29:18.179: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa698fef-f0c0-4b0f-8c95-1406a6f2caf3" in namespace "downward-api-4844" to be "Succeeded or Failed"
Aug 11 12:29:18.215: INFO: Pod "downwardapi-volume-fa698fef-f0c0-4b0f-8c95-1406a6f2caf3": Phase="Pending", Reason="", readiness=false. Elapsed: 35.177964ms
Aug 11 12:29:20.244: INFO: Pod "downwardapi-volume-fa698fef-f0c0-4b0f-8c95-1406a6f2caf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064591005s
Aug 11 12:29:22.248: INFO: Pod "downwardapi-volume-fa698fef-f0c0-4b0f-8c95-1406a6f2caf3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068499541s
Aug 11 12:29:24.253: INFO: Pod "downwardapi-volume-fa698fef-f0c0-4b0f-8c95-1406a6f2caf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.073311029s
STEP: Saw pod success
Aug 11 12:29:24.253: INFO: Pod "downwardapi-volume-fa698fef-f0c0-4b0f-8c95-1406a6f2caf3" satisfied condition "Succeeded or Failed"
Aug 11 12:29:24.256: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-fa698fef-f0c0-4b0f-8c95-1406a6f2caf3 container client-container: 
STEP: delete the pod
Aug 11 12:29:24.288: INFO: Waiting for pod downwardapi-volume-fa698fef-f0c0-4b0f-8c95-1406a6f2caf3 to disappear
Aug 11 12:29:24.333: INFO: Pod downwardapi-volume-fa698fef-f0c0-4b0f-8c95-1406a6f2caf3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:29:24.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4844" for this suite.

• [SLOW TEST:6.290 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1696,"failed":0}
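The pod this test created projects the container's CPU limit into a file via a downward API volume; because no `resources.limits.cpu` is set, the projected value falls back to the node's allocatable CPU, which is what the test asserts. A hedged sketch of the shape of that pod (image and paths are assumptions, not the suite's exact values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                    # assumed; the e2e suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # Note: no resources.limits.cpu here -- the projected value
    # defaults to the node's allocatable CPU.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```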
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:29:24.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-6a02dc73-3ddb-422b-98c7-e307c2b6da18
STEP: Creating a pod to test consume secrets
Aug 11 12:29:24.432: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3bfbc441-f80e-4d22-991e-67a0bfc91b3a" in namespace "projected-1346" to be "Succeeded or Failed"
Aug 11 12:29:24.470: INFO: Pod "pod-projected-secrets-3bfbc441-f80e-4d22-991e-67a0bfc91b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.858943ms
Aug 11 12:29:26.475: INFO: Pod "pod-projected-secrets-3bfbc441-f80e-4d22-991e-67a0bfc91b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043202601s
Aug 11 12:29:28.483: INFO: Pod "pod-projected-secrets-3bfbc441-f80e-4d22-991e-67a0bfc91b3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050988882s
STEP: Saw pod success
Aug 11 12:29:28.483: INFO: Pod "pod-projected-secrets-3bfbc441-f80e-4d22-991e-67a0bfc91b3a" satisfied condition "Succeeded or Failed"
Aug 11 12:29:28.484: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-3bfbc441-f80e-4d22-991e-67a0bfc91b3a container projected-secret-volume-test: 
STEP: delete the pod
Aug 11 12:29:28.530: INFO: Waiting for pod pod-projected-secrets-3bfbc441-f80e-4d22-991e-67a0bfc91b3a to disappear
Aug 11 12:29:28.566: INFO: Pod pod-projected-secrets-3bfbc441-f80e-4d22-991e-67a0bfc91b3a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:29:28.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1346" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1703,"failed":0}
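The "mappings and Item Mode" in the test name refer to the per-item `path` remapping and `mode` fields of a projected secret volume. A minimal sketch under stated assumptions (image, key, and path names are illustrative, not the suite's exact values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox                       # assumed; the e2e suite uses its own test image
    command: ["sh", "-c", "cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # the Secret created in the test's first STEP
          items:
          - key: data-1            # assumed key name
            path: new-path-data-1  # the "mapping": key is exposed under a new path
            mode: 0400             # per-item file mode, the "Item Mode" under test
```

The `mode` is why this test is `[LinuxOnly]`: file permission bits are not enforced on Windows nodes.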
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:29:28.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:29:33.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8859" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":111,"skipped":1762,"failed":0}

------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:29:33.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:29:33.507: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"efc1ee63-d57a-4246-8eea-9203615bf6de", Controller:(*bool)(0xc00309ed82), BlockOwnerDeletion:(*bool)(0xc00309ed83)}}
Aug 11 12:29:33.568: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"8e0603a3-114d-4743-92c5-134e87ee8a47", Controller:(*bool)(0xc0029e76c2), BlockOwnerDeletion:(*bool)(0xc0029e76c3)}}
Aug 11 12:29:33.621: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ff9c6c06-8af8-4329-90bf-71e9afc39a69", Controller:(*bool)(0xc004deff12), BlockOwnerDeletion:(*bool)(0xc004deff13)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:29:38.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7560" for this suite.

• [SLOW TEST:5.360 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":112,"skipped":1762,"failed":0}
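The INFO lines above show the cycle the garbage collector must tolerate: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. Sketching one link of that cycle as a manifest (container spec is an assumption; the UID is taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3            # pod1 is "owned" by pod3, closing the circle
    uid: efc1ee63-d57a-4246-8eea-9203615bf6de   # from the log above
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: c
    image: busybox        # assumed; any container would do
```

The GC detects that the ownership graph contains a cycle and still deletes all three pods rather than blocking forever on `blockOwnerDeletion`.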
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:29:38.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:29:42.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4760" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1767,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:29:42.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:29:42.952: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 11 12:29:42.981: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:42.996: INFO: Number of nodes with available pods: 0
Aug 11 12:29:42.996: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:29:44.029: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:44.032: INFO: Number of nodes with available pods: 0
Aug 11 12:29:44.032: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:29:45.001: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:45.006: INFO: Number of nodes with available pods: 0
Aug 11 12:29:45.006: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:29:46.102: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:46.105: INFO: Number of nodes with available pods: 0
Aug 11 12:29:46.105: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:29:47.001: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:47.004: INFO: Number of nodes with available pods: 1
Aug 11 12:29:47.004: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:29:48.058: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:48.062: INFO: Number of nodes with available pods: 2
Aug 11 12:29:48.062: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 11 12:29:48.621: INFO: Wrong image for pod: daemon-set-bxwfh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:48.621: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:48.669: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:49.771: INFO: Wrong image for pod: daemon-set-bxwfh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:49.771: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:49.777: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:50.686: INFO: Wrong image for pod: daemon-set-bxwfh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:50.687: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:50.690: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:51.674: INFO: Wrong image for pod: daemon-set-bxwfh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:51.674: INFO: Pod daemon-set-bxwfh is not available
Aug 11 12:29:51.674: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:51.689: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:52.672: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:52.672: INFO: Pod daemon-set-w9brn is not available
Aug 11 12:29:52.676: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:53.723: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:53.723: INFO: Pod daemon-set-w9brn is not available
Aug 11 12:29:53.728: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:54.741: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:54.741: INFO: Pod daemon-set-w9brn is not available
Aug 11 12:29:54.745: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:55.744: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:55.748: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:56.807: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:56.812: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:57.675: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:57.680: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:58.681: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:58.681: INFO: Pod daemon-set-hf5c8 is not available
Aug 11 12:29:58.685: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:29:59.693: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:29:59.693: INFO: Pod daemon-set-hf5c8 is not available
Aug 11 12:29:59.697: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:30:00.673: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:30:00.673: INFO: Pod daemon-set-hf5c8 is not available
Aug 11 12:30:00.678: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:30:01.675: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:30:01.675: INFO: Pod daemon-set-hf5c8 is not available
Aug 11 12:30:01.681: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:30:02.706: INFO: Wrong image for pod: daemon-set-hf5c8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 11 12:30:02.706: INFO: Pod daemon-set-hf5c8 is not available
Aug 11 12:30:02.710: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:30:03.674: INFO: Pod daemon-set-wtbzf is not available
Aug 11 12:30:03.679: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 11 12:30:03.683: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:30:03.686: INFO: Number of nodes with available pods: 1
Aug 11 12:30:03.686: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:30:04.712: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:30:04.714: INFO: Number of nodes with available pods: 1
Aug 11 12:30:04.714: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:30:05.692: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:30:05.696: INFO: Number of nodes with available pods: 1
Aug 11 12:30:05.696: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:30:06.692: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:30:06.696: INFO: Number of nodes with available pods: 1
Aug 11 12:30:06.696: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:30:07.693: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:30:07.697: INFO: Number of nodes with available pods: 1
Aug 11 12:30:07.697: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:30:08.691: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:30:08.694: INFO: Number of nodes with available pods: 2
Aug 11 12:30:08.694: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8444, will wait for the garbage collector to delete the pods
Aug 11 12:30:08.766: INFO: Deleting DaemonSet.extensions daemon-set took: 5.276887ms
Aug 11 12:30:09.066: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.290656ms
Aug 11 12:30:11.869: INFO: Number of nodes with available pods: 0
Aug 11 12:30:11.869: INFO: Number of running nodes: 0, number of available pods: 0
Aug 11 12:30:11.872: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8444/daemonsets","resourceVersion":"8562101"},"items":null}

Aug 11 12:30:11.875: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8444/pods","resourceVersion":"8562101"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:30:11.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8444" for this suite.

• [SLOW TEST:29.076 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":114,"skipped":1776,"failed":0}
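The rolling update traced above (one pod drained and replaced per node while the other node's pod stays available) is driven by the DaemonSet's `updateStrategy`. A hedged sketch of the object under test; the labels are assumptions, while the initial image is taken from the "Wrong image for pod" lines in the log:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-8444
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set      # assumed label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1               # default: replace at most one node's pod at a time
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # initial image from the log
```

The test then patches `spec.template.spec.containers[0].image` to `us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12` and waits for every node's pod to be recreated with the new image, which is the sequence of "Wrong image for pod" / "is not available" lines captured above.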
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:30:11.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:30:11.981: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:30:13.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8276" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":115,"skipped":1790,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:30:13.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:30:13.914: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:30:15.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732745813, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732745813, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732745814, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732745813, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:30:17.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732745813, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732745813, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732745814, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732745813, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:30:21.136: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:30:31.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5828" for this suite.
STEP: Destroying namespace "webhook-5828-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.536 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":116,"skipped":1825,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:30:31.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 11 12:30:36.208: INFO: Successfully updated pod "annotationupdate13e669bc-3150-4b0d-99e1-a70883e0c838"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:30:40.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-589" for this suite.

• [SLOW TEST:8.735 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":1842,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:30:40.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:30:40.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abda6507-b525-45c3-afd5-33b736a98ee1" in namespace "projected-5357" to be "Succeeded or Failed"
Aug 11 12:30:40.403: INFO: Pod "downwardapi-volume-abda6507-b525-45c3-afd5-33b736a98ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.013295ms
Aug 11 12:30:42.407: INFO: Pod "downwardapi-volume-abda6507-b525-45c3-afd5-33b736a98ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020006045s
Aug 11 12:30:44.410: INFO: Pod "downwardapi-volume-abda6507-b525-45c3-afd5-33b736a98ee1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023124422s
STEP: Saw pod success
Aug 11 12:30:44.410: INFO: Pod "downwardapi-volume-abda6507-b525-45c3-afd5-33b736a98ee1" satisfied condition "Succeeded or Failed"
Aug 11 12:30:44.412: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-abda6507-b525-45c3-afd5-33b736a98ee1 container client-container: 
STEP: delete the pod
Aug 11 12:30:44.521: INFO: Waiting for pod downwardapi-volume-abda6507-b525-45c3-afd5-33b736a98ee1 to disappear
Aug 11 12:30:44.545: INFO: Pod downwardapi-volume-abda6507-b525-45c3-afd5-33b736a98ee1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:30:44.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5357" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":1845,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:30:44.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 11 12:30:44.649: INFO: PodSpec: initContainers in spec.initContainers
Aug 11 12:31:35.361: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-48a85976-d93e-4e51-9671-3f47a08f225d", GenerateName:"", Namespace:"init-container-6112", SelfLink:"/api/v1/namespaces/init-container-6112/pods/pod-init-48a85976-d93e-4e51-9671-3f47a08f225d", UID:"f7a1e5ea-97a3-47bf-8f12-a2cd846a915d", ResourceVersion:"8562542", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732745844, loc:(*time.Location)(0x7b220e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"649557753"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ac46e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ac4700)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ac4720), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ac4740)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rg4d8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003099840), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rg4d8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rg4d8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rg4d8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00502eff8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002ce18f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00502f080)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00502f0a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00502f0a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00502f0ac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732745844, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732745844, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732745844, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732745844, loc:(*time.Location)(0x7b220e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.15", PodIP:"10.244.1.79", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.79"}}, StartTime:(*v1.Time)(0xc002ac4760), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002ce19d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002ce1a40)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://59ff670ef95d3f2b7c4da984f486644468048c277cc9a0c2d4e39108f311a3b2", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ac47a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ac4780), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00502f17f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:31:35.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6112" for this suite.

• [SLOW TEST:50.935 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":119,"skipped":1861,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:31:35.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-9662/secret-test-b29042fa-5801-4535-8f21-fd133adae9e7
STEP: Creating a pod to test consume secrets
Aug 11 12:31:35.620: INFO: Waiting up to 5m0s for pod "pod-configmaps-40dddc2e-6a76-4fa6-87b2-c48e1ed95fba" in namespace "secrets-9662" to be "Succeeded or Failed"
Aug 11 12:31:35.630: INFO: Pod "pod-configmaps-40dddc2e-6a76-4fa6-87b2-c48e1ed95fba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.503321ms
Aug 11 12:31:37.635: INFO: Pod "pod-configmaps-40dddc2e-6a76-4fa6-87b2-c48e1ed95fba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014847175s
Aug 11 12:31:39.638: INFO: Pod "pod-configmaps-40dddc2e-6a76-4fa6-87b2-c48e1ed95fba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017944683s
STEP: Saw pod success
Aug 11 12:31:39.638: INFO: Pod "pod-configmaps-40dddc2e-6a76-4fa6-87b2-c48e1ed95fba" satisfied condition "Succeeded or Failed"
Aug 11 12:31:39.640: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-40dddc2e-6a76-4fa6-87b2-c48e1ed95fba container env-test: 
STEP: delete the pod
Aug 11 12:31:39.820: INFO: Waiting for pod pod-configmaps-40dddc2e-6a76-4fa6-87b2-c48e1ed95fba to disappear
Aug 11 12:31:39.834: INFO: Pod pod-configmaps-40dddc2e-6a76-4fa6-87b2-c48e1ed95fba no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:31:39.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9662" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":1874,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:31:39.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-c4adc24f-d672-4bdb-b76e-973eceac07cf
STEP: Creating a pod to test consume configMaps
Aug 11 12:31:39.914: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-403775e0-cc5d-4c14-befb-f8cba6b179f3" in namespace "projected-9900" to be "Succeeded or Failed"
Aug 11 12:31:39.974: INFO: Pod "pod-projected-configmaps-403775e0-cc5d-4c14-befb-f8cba6b179f3": Phase="Pending", Reason="", readiness=false. Elapsed: 59.969998ms
Aug 11 12:31:41.979: INFO: Pod "pod-projected-configmaps-403775e0-cc5d-4c14-befb-f8cba6b179f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064190284s
Aug 11 12:31:43.983: INFO: Pod "pod-projected-configmaps-403775e0-cc5d-4c14-befb-f8cba6b179f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068665696s
STEP: Saw pod success
Aug 11 12:31:43.983: INFO: Pod "pod-projected-configmaps-403775e0-cc5d-4c14-befb-f8cba6b179f3" satisfied condition "Succeeded or Failed"
Aug 11 12:31:43.986: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-403775e0-cc5d-4c14-befb-f8cba6b179f3 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 12:31:44.009: INFO: Waiting for pod pod-projected-configmaps-403775e0-cc5d-4c14-befb-f8cba6b179f3 to disappear
Aug 11 12:31:44.094: INFO: Pod pod-projected-configmaps-403775e0-cc5d-4c14-befb-f8cba6b179f3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:31:44.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9900" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":1882,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:31:44.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-xmjd4 in namespace proxy-2797
I0811 12:31:44.281120       7 runners.go:190] Created replication controller with name: proxy-service-xmjd4, namespace: proxy-2797, replica count: 1
I0811 12:31:45.331557       7 runners.go:190] proxy-service-xmjd4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 12:31:46.331795       7 runners.go:190] proxy-service-xmjd4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 12:31:47.332054       7 runners.go:190] proxy-service-xmjd4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 12:31:48.332281       7 runners.go:190] proxy-service-xmjd4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 12:31:49.332517       7 runners.go:190] proxy-service-xmjd4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 12:31:50.332867       7 runners.go:190] proxy-service-xmjd4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 12:31:51.333137       7 runners.go:190] proxy-service-xmjd4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 12:31:52.333372       7 runners.go:190] proxy-service-xmjd4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 12:31:53.333658       7 runners.go:190] proxy-service-xmjd4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 12:31:54.333925       7 runners.go:190] proxy-service-xmjd4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 12:31:55.334153       7 runners.go:190] proxy-service-xmjd4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 11 12:31:55.337: INFO: setup took 11.16465327s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 11 12:31:55.342: INFO: (0) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp/proxy/: test (200; 5.451562ms)
Aug 11 12:31:55.342: INFO: (0) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 5.269219ms)
Aug 11 12:31:55.348: INFO: (0) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 10.822935ms)
Aug 11 12:31:55.348: INFO: (0) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 10.948028ms)
Aug 11 12:31:55.348: INFO: (0) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 10.913009ms)
Aug 11 12:31:55.349: INFO: (0) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 11.720585ms)
Aug 11 12:31:55.349: INFO: (0) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname2/proxy/: bar (200; 11.895995ms)
Aug 11 12:31:55.349: INFO: (0) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 11.857597ms)
Aug 11 12:31:55.349: INFO: (0) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname1/proxy/: foo (200; 11.819792ms)
Aug 11 12:31:55.349: INFO: (0) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 12.000184ms)
Aug 11 12:31:55.350: INFO: (0) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 13.538331ms)
Aug 11 12:31:55.354: INFO: (0) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 17.638816ms)
Aug 11 12:31:55.355: INFO: (0) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 17.484808ms)
Aug 11 12:31:55.355: INFO: (0) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 17.518668ms)
Aug 11 12:31:55.355: INFO: (0) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 17.571957ms)
Aug 11 12:31:55.358: INFO: (0) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test<... (200; 23.515965ms)
Aug 11 12:31:55.382: INFO: (1) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 23.76896ms)
Aug 11 12:31:55.383: INFO: (1) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 24.202984ms)
Aug 11 12:31:55.383: INFO: (1) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 24.226827ms)
Aug 11 12:31:55.383: INFO: (1) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname2/proxy/: bar (200; 24.823167ms)
Aug 11 12:31:55.383: INFO: (1) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname1/proxy/: foo (200; 24.876409ms)
Aug 11 12:31:55.383: INFO: (1) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp/proxy/: test (200; 24.961896ms)
Aug 11 12:31:55.383: INFO: (1) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 25.074799ms)
Aug 11 12:31:55.384: INFO: (1) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 25.125064ms)
Aug 11 12:31:55.384: INFO: (1) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 25.177862ms)
Aug 11 12:31:55.384: INFO: (1) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 25.342243ms)
Aug 11 12:31:55.384: INFO: (1) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 25.600762ms)
Aug 11 12:31:55.385: INFO: (1) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 26.204205ms)
Aug 11 12:31:55.385: INFO: (1) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 26.397337ms)
Aug 11 12:31:55.385: INFO: (1) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test (200; 5.060549ms)
Aug 11 12:31:55.390: INFO: (2) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 4.984463ms)
Aug 11 12:31:55.390: INFO: (2) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 4.926099ms)
Aug 11 12:31:55.390: INFO: (2) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 5.068259ms)
Aug 11 12:31:55.390: INFO: (2) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test (200; 4.662681ms)
Aug 11 12:31:55.397: INFO: (3) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 4.610408ms)
Aug 11 12:31:55.397: INFO: (3) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 4.60949ms)
Aug 11 12:31:55.397: INFO: (3) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 4.619211ms)
Aug 11 12:31:55.397: INFO: (3) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 4.66744ms)
Aug 11 12:31:55.397: INFO: (3) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 4.625363ms)
Aug 11 12:31:55.397: INFO: (3) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 4.642362ms)
Aug 11 12:31:55.397: INFO: (3) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 4.674271ms)
Aug 11 12:31:55.397: INFO: (3) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 5.118807ms)
Aug 11 12:31:55.397: INFO: (3) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: ... (200; 5.115656ms)
Aug 11 12:31:55.398: INFO: (3) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 5.306847ms)
Aug 11 12:31:55.401: INFO: (4) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 2.735224ms)
Aug 11 12:31:55.401: INFO: (4) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname2/proxy/: bar (200; 3.471175ms)
Aug 11 12:31:55.401: INFO: (4) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 3.407023ms)
Aug 11 12:31:55.401: INFO: (4) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 3.443979ms)
Aug 11 12:31:55.402: INFO: (4) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 3.878575ms)
Aug 11 12:31:55.402: INFO: (4) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 3.946096ms)
Aug 11 12:31:55.402: INFO: (4) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname1/proxy/: foo (200; 4.414794ms)
Aug 11 12:31:55.402: INFO: (4) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp/proxy/: test (200; 4.411472ms)
Aug 11 12:31:55.402: INFO: (4) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 4.542213ms)
Aug 11 12:31:55.403: INFO: (4) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 4.933782ms)
Aug 11 12:31:55.403: INFO: (4) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 5.001085ms)
Aug 11 12:31:55.403: INFO: (4) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 4.947129ms)
Aug 11 12:31:55.403: INFO: (4) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 4.941214ms)
Aug 11 12:31:55.403: INFO: (4) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: ... (200; 5.060787ms)
Aug 11 12:31:55.406: INFO: (5) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 3.068621ms)
Aug 11 12:31:55.406: INFO: (5) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 3.215788ms)
Aug 11 12:31:55.407: INFO: (5) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test (200; 3.649856ms)
Aug 11 12:31:55.407: INFO: (5) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 3.964381ms)
Aug 11 12:31:55.407: INFO: (5) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 4.251026ms)
Aug 11 12:31:55.407: INFO: (5) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 4.296188ms)
Aug 11 12:31:55.407: INFO: (5) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 4.372986ms)
Aug 11 12:31:55.408: INFO: (5) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 4.753541ms)
Aug 11 12:31:55.408: INFO: (5) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 4.706853ms)
Aug 11 12:31:55.408: INFO: (5) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname1/proxy/: foo (200; 4.760808ms)
Aug 11 12:31:55.408: INFO: (5) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 4.746925ms)
Aug 11 12:31:55.408: INFO: (5) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 4.96351ms)
Aug 11 12:31:55.408: INFO: (5) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 4.998305ms)
Aug 11 12:31:55.408: INFO: (5) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 5.144979ms)
Aug 11 12:31:55.408: INFO: (5) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname2/proxy/: bar (200; 5.248354ms)
Aug 11 12:31:55.411: INFO: (6) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test (200; 6.813294ms)
Aug 11 12:31:55.415: INFO: (6) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 6.852405ms)
Aug 11 12:31:55.415: INFO: (6) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 6.810932ms)
Aug 11 12:31:55.415: INFO: (6) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 6.885514ms)
Aug 11 12:31:55.415: INFO: (6) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname2/proxy/: bar (200; 6.865011ms)
Aug 11 12:31:55.415: INFO: (6) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 7.089192ms)
Aug 11 12:31:55.416: INFO: (6) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 7.328714ms)
Aug 11 12:31:55.417: INFO: (6) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 8.731659ms)
Aug 11 12:31:55.417: INFO: (6) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 8.855882ms)
Aug 11 12:31:55.417: INFO: (6) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 8.892325ms)
Aug 11 12:31:55.418: INFO: (6) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 9.327775ms)
Aug 11 12:31:55.418: INFO: (6) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 9.372125ms)
Aug 11 12:31:55.422: INFO: (7) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 4.192319ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 4.727733ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp/proxy/: test (200; 4.728703ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 4.755046ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 4.954115ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 4.989552ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 4.958502ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 5.038537ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname1/proxy/: foo (200; 5.078034ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 5.137395ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname2/proxy/: bar (200; 5.069769ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 5.454203ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 5.463062ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 5.427586ms)
Aug 11 12:31:55.423: INFO: (7) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test<... (200; 5.665708ms)
Aug 11 12:31:55.427: INFO: (8) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 3.272165ms)
Aug 11 12:31:55.427: INFO: (8) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test (200; 4.52652ms)
Aug 11 12:31:55.428: INFO: (8) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 4.581849ms)
Aug 11 12:31:55.428: INFO: (8) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 4.683232ms)
Aug 11 12:31:55.428: INFO: (8) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 4.683114ms)
Aug 11 12:31:55.428: INFO: (8) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 4.996993ms)
Aug 11 12:31:55.428: INFO: (8) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 5.027962ms)
Aug 11 12:31:55.429: INFO: (8) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 5.107119ms)
Aug 11 12:31:55.429: INFO: (8) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 5.086332ms)
Aug 11 12:31:55.432: INFO: (9) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 3.123461ms)
Aug 11 12:31:55.433: INFO: (9) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 4.532737ms)
Aug 11 12:31:55.433: INFO: (9) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 4.607277ms)
Aug 11 12:31:55.433: INFO: (9) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 4.632449ms)
Aug 11 12:31:55.433: INFO: (9) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 4.580591ms)
Aug 11 12:31:55.433: INFO: (9) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 4.627099ms)
Aug 11 12:31:55.433: INFO: (9) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test (200; 4.67797ms)
Aug 11 12:31:55.433: INFO: (9) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 4.737047ms)
Aug 11 12:31:55.433: INFO: (9) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname1/proxy/: foo (200; 4.697259ms)
Aug 11 12:31:55.433: INFO: (9) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 4.791299ms)
Aug 11 12:31:55.433: INFO: (9) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 4.723208ms)
Aug 11 12:31:55.436: INFO: (10) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 2.358163ms)
Aug 11 12:31:55.437: INFO: (10) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 3.712202ms)
Aug 11 12:31:55.437: INFO: (10) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 3.909465ms)
Aug 11 12:31:55.437: INFO: (10) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp/proxy/: test (200; 3.824991ms)
Aug 11 12:31:55.437: INFO: (10) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test<... (200; 4.899152ms)
Aug 11 12:31:55.438: INFO: (10) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 4.883417ms)
Aug 11 12:31:55.442: INFO: (11) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 3.329138ms)
Aug 11 12:31:55.442: INFO: (11) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 3.340525ms)
Aug 11 12:31:55.442: INFO: (11) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 3.406269ms)
Aug 11 12:31:55.442: INFO: (11) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 3.652982ms)
Aug 11 12:31:55.442: INFO: (11) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: ... (200; 4.497448ms)
Aug 11 12:31:55.443: INFO: (11) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 4.744958ms)
Aug 11 12:31:55.443: INFO: (11) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname2/proxy/: bar (200; 4.918404ms)
Aug 11 12:31:55.443: INFO: (11) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 5.002816ms)
Aug 11 12:31:55.443: INFO: (11) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 4.988485ms)
Aug 11 12:31:55.443: INFO: (11) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp/proxy/: test (200; 4.99896ms)
Aug 11 12:31:55.444: INFO: (11) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 5.050446ms)
Aug 11 12:31:55.444: INFO: (11) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 5.084033ms)
Aug 11 12:31:55.449: INFO: (12) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 5.495137ms)
Aug 11 12:31:55.449: INFO: (12) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname1/proxy/: foo (200; 5.621355ms)
Aug 11 12:31:55.449: INFO: (12) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 5.57761ms)
Aug 11 12:31:55.449: INFO: (12) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 5.594041ms)
Aug 11 12:31:55.449: INFO: (12) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 5.565717ms)
Aug 11 12:31:55.449: INFO: (12) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname2/proxy/: bar (200; 5.868906ms)
Aug 11 12:31:55.450: INFO: (12) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 5.960713ms)
Aug 11 12:31:55.450: INFO: (12) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 5.994644ms)
Aug 11 12:31:55.450: INFO: (12) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 6.070908ms)
Aug 11 12:31:55.450: INFO: (12) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 6.074915ms)
Aug 11 12:31:55.450: INFO: (12) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 6.076373ms)
Aug 11 12:31:55.450: INFO: (12) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test (200; 6.188188ms)
Aug 11 12:31:55.450: INFO: (12) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 6.163047ms)
Aug 11 12:31:55.450: INFO: (12) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 6.201843ms)
Aug 11 12:31:55.450: INFO: (12) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 6.153682ms)
Aug 11 12:31:55.454: INFO: (13) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 4.089807ms)
Aug 11 12:31:55.454: INFO: (13) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 4.188189ms)
Aug 11 12:31:55.455: INFO: (13) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 4.893652ms)
Aug 11 12:31:55.455: INFO: (13) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 4.984205ms)
Aug 11 12:31:55.455: INFO: (13) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname2/proxy/: bar (200; 5.045625ms)
Aug 11 12:31:55.455: INFO: (13) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 5.086921ms)
Aug 11 12:31:55.455: INFO: (13) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 4.98186ms)
Aug 11 12:31:55.455: INFO: (13) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test (200; 5.31694ms)
Aug 11 12:31:55.455: INFO: (13) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 5.305839ms)
Aug 11 12:31:55.455: INFO: (13) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 5.270678ms)
Aug 11 12:31:55.460: INFO: (14) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 4.587501ms)
Aug 11 12:31:55.460: INFO: (14) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname2/proxy/: bar (200; 4.604534ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 5.406706ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 5.456722ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 5.37452ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 5.403957ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 5.495325ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname1/proxy/: foo (200; 5.373293ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp/proxy/: test (200; 5.394681ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 5.463543ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 5.523202ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 5.443176ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 5.484912ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 5.487441ms)
Aug 11 12:31:55.461: INFO: (14) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test<... (200; 3.939517ms)
Aug 11 12:31:55.466: INFO: (15) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 3.966271ms)
Aug 11 12:31:55.466: INFO: (15) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 4.066388ms)
Aug 11 12:31:55.466: INFO: (15) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 4.059174ms)
Aug 11 12:31:55.467: INFO: (15) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 4.287632ms)
Aug 11 12:31:55.467: INFO: (15) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 4.537914ms)
Aug 11 12:31:55.467: INFO: (15) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp/proxy/: test (200; 4.569181ms)
Aug 11 12:31:55.467: INFO: (15) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 4.486286ms)
Aug 11 12:31:55.467: INFO: (15) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test<... (200; 3.657071ms)
Aug 11 12:31:55.472: INFO: (16) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 3.881479ms)
Aug 11 12:31:55.472: INFO: (16) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 4.041489ms)
Aug 11 12:31:55.472: INFO: (16) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 4.234841ms)
Aug 11 12:31:55.473: INFO: (16) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname1/proxy/: foo (200; 4.68797ms)
Aug 11 12:31:55.473: INFO: (16) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 4.651332ms)
Aug 11 12:31:55.473: INFO: (16) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 4.810765ms)
Aug 11 12:31:55.473: INFO: (16) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp/proxy/: test (200; 5.036337ms)
Aug 11 12:31:55.473: INFO: (16) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 5.046595ms)
Aug 11 12:31:55.473: INFO: (16) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 5.092255ms)
Aug 11 12:31:55.473: INFO: (16) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test (200; 3.107319ms)
Aug 11 12:31:55.477: INFO: (17) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 3.819029ms)
Aug 11 12:31:55.477: INFO: (17) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:1080/proxy/: test<... (200; 3.849714ms)
Aug 11 12:31:55.477: INFO: (17) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 3.92091ms)
Aug 11 12:31:55.477: INFO: (17) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 3.889162ms)
Aug 11 12:31:55.477: INFO: (17) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 3.937835ms)
Aug 11 12:31:55.477: INFO: (17) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 3.943199ms)
Aug 11 12:31:55.477: INFO: (17) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 3.971249ms)
Aug 11 12:31:55.477: INFO: (17) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test (200; 2.685606ms)
Aug 11 12:31:55.482: INFO: (18) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 3.384972ms)
Aug 11 12:31:55.482: INFO: (18) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname1/proxy/: foo (200; 3.792114ms)
Aug 11 12:31:55.482: INFO: (18) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 3.716849ms)
Aug 11 12:31:55.482: INFO: (18) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 3.941326ms)
Aug 11 12:31:55.482: INFO: (18) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 4.067368ms)
Aug 11 12:31:55.482: INFO: (18) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname2/proxy/: bar (200; 4.106575ms)
Aug 11 12:31:55.482: INFO: (18) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 4.163973ms)
Aug 11 12:31:55.483: INFO: (18) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 4.434643ms)
Aug 11 12:31:55.483: INFO: (18) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 4.417982ms)
Aug 11 12:31:55.483: INFO: (18) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 4.58209ms)
Aug 11 12:31:55.483: INFO: (18) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 4.53066ms)
Aug 11 12:31:55.483: INFO: (18) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 4.624467ms)
Aug 11 12:31:55.483: INFO: (18) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 4.771364ms)
Aug 11 12:31:55.483: INFO: (18) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test<... (200; 4.883782ms)
Aug 11 12:31:55.487: INFO: (19) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 3.842323ms)
Aug 11 12:31:55.487: INFO: (19) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname1/proxy/: foo (200; 4.007947ms)
Aug 11 12:31:55.487: INFO: (19) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:162/proxy/: bar (200; 4.087271ms)
Aug 11 12:31:55.487: INFO: (19) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname1/proxy/: foo (200; 4.149515ms)
Aug 11 12:31:55.487: INFO: (19) /api/v1/namespaces/proxy-2797/services/proxy-service-xmjd4:portname2/proxy/: bar (200; 4.30349ms)
Aug 11 12:31:55.488: INFO: (19) /api/v1/namespaces/proxy-2797/services/http:proxy-service-xmjd4:portname2/proxy/: bar (200; 4.527505ms)
Aug 11 12:31:55.488: INFO: (19) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:462/proxy/: tls qux (200; 4.585466ms)
Aug 11 12:31:55.488: INFO: (19) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname2/proxy/: tls qux (200; 4.718366ms)
Aug 11 12:31:55.488: INFO: (19) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 4.877823ms)
Aug 11 12:31:55.488: INFO: (19) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:160/proxy/: foo (200; 4.963838ms)
Aug 11 12:31:55.488: INFO: (19) /api/v1/namespaces/proxy-2797/services/https:proxy-service-xmjd4:tlsportname1/proxy/: tls baz (200; 4.999791ms)
Aug 11 12:31:55.488: INFO: (19) /api/v1/namespaces/proxy-2797/pods/http:proxy-service-xmjd4-h26zp:1080/proxy/: ... (200; 5.042339ms)
Aug 11 12:31:55.488: INFO: (19) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:460/proxy/: tls baz (200; 5.201222ms)
Aug 11 12:31:55.488: INFO: (19) /api/v1/namespaces/proxy-2797/pods/https:proxy-service-xmjd4-h26zp:443/proxy/: test<... (200; 5.283477ms)
Aug 11 12:31:55.488: INFO: (19) /api/v1/namespaces/proxy-2797/pods/proxy-service-xmjd4-h26zp/proxy/: test (200; 5.318754ms)
STEP: deleting ReplicationController proxy-service-xmjd4 in namespace proxy-2797, will wait for the garbage collector to delete the pods
Aug 11 12:31:55.546: INFO: Deleting ReplicationController proxy-service-xmjd4 took: 5.724262ms
Aug 11 12:31:55.846: INFO: Terminating ReplicationController proxy-service-xmjd4 pods took: 300.259595ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:32:03.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2797" for this suite.

• [SLOW TEST:19.756 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":122,"skipped":1893,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:32:03.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-65533eb7-11f2-4aea-b15f-1dc2b3e5df38
STEP: Creating a pod to test consume configMaps
Aug 11 12:32:04.774: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a3a7d71-cdbb-4823-88b5-32fb813ce326" in namespace "configmap-8176" to be "Succeeded or Failed"
Aug 11 12:32:04.904: INFO: Pod "pod-configmaps-1a3a7d71-cdbb-4823-88b5-32fb813ce326": Phase="Pending", Reason="", readiness=false. Elapsed: 129.139513ms
Aug 11 12:32:06.907: INFO: Pod "pod-configmaps-1a3a7d71-cdbb-4823-88b5-32fb813ce326": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132398748s
Aug 11 12:32:08.955: INFO: Pod "pod-configmaps-1a3a7d71-cdbb-4823-88b5-32fb813ce326": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180350239s
Aug 11 12:32:10.959: INFO: Pod "pod-configmaps-1a3a7d71-cdbb-4823-88b5-32fb813ce326": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.184059214s
STEP: Saw pod success
Aug 11 12:32:10.959: INFO: Pod "pod-configmaps-1a3a7d71-cdbb-4823-88b5-32fb813ce326" satisfied condition "Succeeded or Failed"
Aug 11 12:32:10.961: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-1a3a7d71-cdbb-4823-88b5-32fb813ce326 container configmap-volume-test: 
STEP: delete the pod
Aug 11 12:32:10.998: INFO: Waiting for pod pod-configmaps-1a3a7d71-cdbb-4823-88b5-32fb813ce326 to disappear
Aug 11 12:32:11.002: INFO: Pod pod-configmaps-1a3a7d71-cdbb-4823-88b5-32fb813ce326 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:32:11.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8176" for this suite.

• [SLOW TEST:7.150 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":1901,"failed":0}
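The test above consumes a ConfigMap through a volume that remaps a key to a different file path while the container runs as a non-root user. A minimal manifest sketch of that setup (the names, UID, image, and key/path values here are hypothetical stand-ins; the e2e framework generates its own):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example      # hypothetical name
spec:
  securityContext:
    runAsUser: 1001                 # non-root UID (assumed value)
  restartPolicy: Never
  containers:
  - name: configmap-volume-test     # container name from the log
    image: busybox                  # placeholder; the suite uses its own test image
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                        # the "mappings": remap key data-1 to a nested path
      - key: data-1
        path: path/to/data-1
```

The pod runs to completion ("Succeeded or Failed" in the log) because the container just reads the mapped file and exits.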
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:32:11.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:32:27.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8801" for this suite.

• [SLOW TEST:16.699 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":124,"skipped":1965,"failed":0}
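The two quotas the test creates differ only in scope: one counts BestEffort pods (no resource requests or limits), the other counts everything else. A sketch of the pair (names and limits are hypothetical):

```yaml
# Counts only BestEffort pods; a BestEffort-scoped quota may only limit pod count.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-besteffort          # hypothetical name
spec:
  hard:
    pods: "5"
  scopes: ["BestEffort"]
---
# Counts only pods that do set requests/limits.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-besteffort      # hypothetical name
spec:
  hard:
    pods: "5"
  scopes: ["NotBestEffort"]
```

This is why the log shows each pod's usage captured by exactly one quota and ignored by the other: a pod with no requests/limits matches the BestEffort scope, and a pod with requests/limits matches NotBestEffort.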
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:32:27.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 11 12:32:27.792: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8974 /api/v1/namespaces/watch-8974/configmaps/e2e-watch-test-configmap-a a08f333a-5052-4ecd-94c7-d5f9376363e7 8562876 0 2020-08-11 12:32:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-11 12:32:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 12:32:27.792: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8974 /api/v1/namespaces/watch-8974/configmaps/e2e-watch-test-configmap-a a08f333a-5052-4ecd-94c7-d5f9376363e7 8562876 0 2020-08-11 12:32:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-11 12:32:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 11 12:32:37.806: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8974 /api/v1/namespaces/watch-8974/configmaps/e2e-watch-test-configmap-a a08f333a-5052-4ecd-94c7-d5f9376363e7 8562925 0 2020-08-11 12:32:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-11 12:32:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 12:32:37.806: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8974 /api/v1/namespaces/watch-8974/configmaps/e2e-watch-test-configmap-a a08f333a-5052-4ecd-94c7-d5f9376363e7 8562925 0 2020-08-11 12:32:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-11 12:32:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 11 12:32:47.815: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8974 /api/v1/namespaces/watch-8974/configmaps/e2e-watch-test-configmap-a a08f333a-5052-4ecd-94c7-d5f9376363e7 8562955 0 2020-08-11 12:32:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-11 12:32:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 12:32:47.815: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8974 /api/v1/namespaces/watch-8974/configmaps/e2e-watch-test-configmap-a a08f333a-5052-4ecd-94c7-d5f9376363e7 8562955 0 2020-08-11 12:32:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-11 12:32:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 11 12:32:57.822: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8974 /api/v1/namespaces/watch-8974/configmaps/e2e-watch-test-configmap-a a08f333a-5052-4ecd-94c7-d5f9376363e7 8562985 0 2020-08-11 12:32:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-11 12:32:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 12:32:57.822: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8974 /api/v1/namespaces/watch-8974/configmaps/e2e-watch-test-configmap-a a08f333a-5052-4ecd-94c7-d5f9376363e7 8562985 0 2020-08-11 12:32:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-11 12:32:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 11 12:33:07.830: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8974 /api/v1/namespaces/watch-8974/configmaps/e2e-watch-test-configmap-b 91751990-73ae-44c8-a7ad-92d30a32f07f 8563015 0 2020-08-11 12:33:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-11 12:33:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 12:33:07.830: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8974 /api/v1/namespaces/watch-8974/configmaps/e2e-watch-test-configmap-b 91751990-73ae-44c8-a7ad-92d30a32f07f 8563015 0 2020-08-11 12:33:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-11 12:33:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 11 12:33:17.835: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8974 /api/v1/namespaces/watch-8974/configmaps/e2e-watch-test-configmap-b 91751990-73ae-44c8-a7ad-92d30a32f07f 8563045 0 2020-08-11 12:33:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-11 12:33:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 12:33:17.835: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8974 /api/v1/namespaces/watch-8974/configmaps/e2e-watch-test-configmap-b 91751990-73ae-44c8-a7ad-92d30a32f07f 8563045 0 2020-08-11 12:33:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-11 12:33:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:33:27.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8974" for this suite.

• [SLOW TEST:60.137 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":125,"skipped":1989,"failed":0}
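The notifications above come from three label-selector watches (label A, label B, and A-or-B) observing ordinary labeled ConfigMaps. A sketch of the watched object, using the names visible in the log:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A   # matched by watch A and watch A-or-B
data:
  mutation: "1"   # bumped on each modify; each bump produces the MODIFIED events in the log
```

Each ADDED/MODIFIED/DELETED event appears twice in the log because two of the three watches (A, and A-or-B) select this label; the label-B watch stays silent until `e2e-watch-test-configmap-b` is created.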
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:33:27.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 11 12:33:27.999: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:33:28.030: INFO: Number of nodes with available pods: 0
Aug 11 12:33:28.030: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:33:29.035: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:33:29.039: INFO: Number of nodes with available pods: 0
Aug 11 12:33:29.039: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:33:30.066: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:33:30.233: INFO: Number of nodes with available pods: 0
Aug 11 12:33:30.233: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:33:31.164: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:33:31.168: INFO: Number of nodes with available pods: 0
Aug 11 12:33:31.168: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:33:32.044: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:33:32.059: INFO: Number of nodes with available pods: 1
Aug 11 12:33:32.059: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:33:33.034: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:33:33.039: INFO: Number of nodes with available pods: 2
Aug 11 12:33:33.039: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 11 12:33:33.111: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:33:33.160: INFO: Number of nodes with available pods: 1
Aug 11 12:33:33.160: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:33:34.164: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:33:34.167: INFO: Number of nodes with available pods: 1
Aug 11 12:33:34.167: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:33:35.165: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:33:35.169: INFO: Number of nodes with available pods: 1
Aug 11 12:33:35.169: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:33:36.203: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:33:36.226: INFO: Number of nodes with available pods: 2
Aug 11 12:33:36.226: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2547, will wait for the garbage collector to delete the pods
Aug 11 12:33:36.290: INFO: Deleting DaemonSet.extensions daemon-set took: 6.502129ms
Aug 11 12:33:36.691: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.237335ms
Aug 11 12:33:43.494: INFO: Number of nodes with available pods: 0
Aug 11 12:33:43.494: INFO: Number of running nodes: 0, number of available pods: 0
Aug 11 12:33:43.496: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2547/daemonsets","resourceVersion":"8563180"},"items":null}

Aug 11 12:33:43.499: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2547/pods","resourceVersion":"8563180"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:33:43.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2547" for this suite.

• [SLOW TEST:15.670 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":126,"skipped":2009,"failed":0}
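The repeated "DaemonSet pods can't tolerate node kali-control-plane" lines above are expected: the test's DaemonSet carries no toleration for the master taint, so only the two workers run pods. A sketch of such a DaemonSet (labels and image are hypothetical; the suite uses its own test image):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: httpd        # placeholder image
      # No toleration for node-role.kubernetes.io/master:NoSchedule, so the
      # control-plane node is skipped. To also schedule there, one would add:
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
```

The "revived" step then works because the DaemonSet controller recreates any daemon pod whose phase is set to Failed, which is what the counts dropping to 1 and climbing back to 2 show.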
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:33:43.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:33:43.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 11 12:33:46.558: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3600 create -f -'
Aug 11 12:33:50.167: INFO: stderr: ""
Aug 11 12:33:50.167: INFO: stdout: "e2e-test-crd-publish-openapi-1686-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 11 12:33:50.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3600 delete e2e-test-crd-publish-openapi-1686-crds test-cr'
Aug 11 12:33:50.293: INFO: stderr: ""
Aug 11 12:33:50.293: INFO: stdout: "e2e-test-crd-publish-openapi-1686-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 11 12:33:50.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3600 apply -f -'
Aug 11 12:33:50.575: INFO: stderr: ""
Aug 11 12:33:50.575: INFO: stdout: "e2e-test-crd-publish-openapi-1686-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 11 12:33:50.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3600 delete e2e-test-crd-publish-openapi-1686-crds test-cr'
Aug 11 12:33:50.682: INFO: stderr: ""
Aug 11 12:33:50.682: INFO: stdout: "e2e-test-crd-publish-openapi-1686-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 11 12:33:50.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1686-crds'
Aug 11 12:33:50.936: INFO: stderr: ""
Aug 11 12:33:50.937: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1686-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:33:53.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3600" for this suite.

• [SLOW TEST:10.373 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":127,"skipped":2024,"failed":0}
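The "preserving unknown fields in an embedded object" behavior comes from a structural schema that marks a nested field as an embedded resource with unknown fields preserved. A sketch of that schema pattern (the group matches the log; the plural/kind names and exact schema are hypothetical, not the test's actual definition):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.crd-publish-openapi-test-unknown-in-nested.example.com  # hypothetical plural
spec:
  group: crd-publish-openapi-test-unknown-in-nested.example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              template:
                type: object
                x-kubernetes-embedded-resource: true       # nested field holds a full object
                x-kubernetes-preserve-unknown-fields: true # unknown properties are kept, not pruned
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
```

Because unknown properties are preserved rather than pruned, both `kubectl create` and `kubectl apply` accept arbitrary extra fields in the embedded object, which is exactly what the client-side validation step in the log exercises.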
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:33:53.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:33:54.592: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:33:56.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746034, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746034, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746034, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746034, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:33:59.852: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:34:00.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3308" for this suite.
STEP: Destroying namespace "webhook-3308-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.415 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":128,"skipped":2061,"failed":0}
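Editor's note: the test above registers a mutating webhook for configmaps via the admissionregistration API. A minimal MutatingWebhookConfiguration of the kind this test registers might look like the following sketch (the webhook name, path, and caBundle are illustrative; the service name and namespace are taken from this run):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook      # illustrative name
webhooks:
  - name: mutate-configmaps.example.com  # illustrative
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
    clientConfig:
      service:
        namespace: webhook-3308        # namespace from this run
        name: e2e-test-webhook         # service name from this run
        path: /mutating-configmaps     # illustrative path
      caBundle: CA_BUNDLE_PLACEHOLDER  # base64 CA cert; not shown in the log
```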
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:34:00.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-dqzb
STEP: Creating a pod to test atomic-volume-subpath
Aug 11 12:34:00.785: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dqzb" in namespace "subpath-1678" to be "Succeeded or Failed"
Aug 11 12:34:00.975: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Pending", Reason="", readiness=false. Elapsed: 190.216968ms
Aug 11 12:34:02.980: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194639383s
Aug 11 12:34:04.984: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Running", Reason="", readiness=true. Elapsed: 4.198841385s
Aug 11 12:34:06.988: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Running", Reason="", readiness=true. Elapsed: 6.202469833s
Aug 11 12:34:08.992: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Running", Reason="", readiness=true. Elapsed: 8.207272006s
Aug 11 12:34:10.997: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Running", Reason="", readiness=true. Elapsed: 10.211590742s
Aug 11 12:34:13.001: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Running", Reason="", readiness=true. Elapsed: 12.216312915s
Aug 11 12:34:15.005: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Running", Reason="", readiness=true. Elapsed: 14.220085562s
Aug 11 12:34:17.010: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Running", Reason="", readiness=true. Elapsed: 16.224550424s
Aug 11 12:34:19.014: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Running", Reason="", readiness=true. Elapsed: 18.229146664s
Aug 11 12:34:21.018: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Running", Reason="", readiness=true. Elapsed: 20.232561686s
Aug 11 12:34:23.022: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Running", Reason="", readiness=true. Elapsed: 22.23723202s
Aug 11 12:34:25.027: INFO: Pod "pod-subpath-test-configmap-dqzb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.242310402s
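Editor's note: the run of "Waiting up to 5m0s ... Elapsed: ..." lines above comes from a poll-until-condition loop in the test framework. A stdlib-only sketch of that pattern — not the framework's exact implementation; the default interval and timeout here are assumptions:

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    start = clock()
    while True:
        if condition():
            return True
        if clock() - start >= timeout:
            return False
        sleep(interval)

# Example: a hypothetical "pod succeeded" check that passes on the third poll.
attempts = {"n": 0}
def pod_succeeded():
    attempts["n"] += 1
    return attempts["n"] >= 3

assert wait_for(pod_succeeded, timeout=5.0, interval=0.0)
```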
STEP: Saw pod success
Aug 11 12:34:25.027: INFO: Pod "pod-subpath-test-configmap-dqzb" satisfied condition "Succeeded or Failed"
Aug 11 12:34:25.030: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-dqzb container test-container-subpath-configmap-dqzb: 
STEP: delete the pod
Aug 11 12:34:25.292: INFO: Waiting for pod pod-subpath-test-configmap-dqzb to disappear
Aug 11 12:34:25.295: INFO: Pod pod-subpath-test-configmap-dqzb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dqzb
Aug 11 12:34:25.295: INFO: Deleting pod "pod-subpath-test-configmap-dqzb" in namespace "subpath-1678"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:34:25.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1678" for this suite.

• [SLOW TEST:24.998 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":129,"skipped":2063,"failed":0}
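Editor's note: the subpath test above mounts a single ConfigMap key into a pod via `subPath`. A sketch of the relevant pod spec (the ConfigMap name, key, image, and command are illustrative; the pod-name pattern matches this run with its random suffix omitted):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap     # name pattern from this run
spec:
  restartPolicy: Never
  volumes:
    - name: config-volume
      configMap:
        name: my-configmap             # illustrative ConfigMap name
  containers:
    - name: test-container-subpath
      image: busybox                   # illustrative image
      command: ["sh", "-c", "cat /probe-volume/probe-file"]  # illustrative
      volumeMounts:
        - name: config-volume
          mountPath: /probe-volume
          subPath: probe-file          # mounts one key as a file, not the whole volume
```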
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:34:25.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:34:26.273: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 11 12:34:28.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746066, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746066, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746066, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746066, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:34:31.351: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:34:31.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:34:32.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-168" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:7.265 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":130,"skipped":2136,"failed":0}
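Editor's note: CR conversion between v1 and v2, as exercised above, is enabled by a `conversion` stanza on the CRD pointing at the webhook service. A sketch of that fragment (the path is illustrative; service name and namespace are from this run):

```yaml
# Fragment of a CustomResourceDefinition spec enabling webhook conversion
spec:
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          namespace: crd-webhook-168               # namespace from this run
          name: e2e-test-crd-conversion-webhook    # service name from this run
          path: /crdconvert                        # illustrative path
```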
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:34:32.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 11 12:34:32.637: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-819 /api/v1/namespaces/watch-819/configmaps/e2e-watch-test-label-changed 22728010-142e-4f29-887d-7bece248fbd2 8563514 0 2020-08-11 12:34:32 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-11 12:34:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 12:34:32.638: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-819 /api/v1/namespaces/watch-819/configmaps/e2e-watch-test-label-changed 22728010-142e-4f29-887d-7bece248fbd2 8563515 0 2020-08-11 12:34:32 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-11 12:34:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 12:34:32.638: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-819 /api/v1/namespaces/watch-819/configmaps/e2e-watch-test-label-changed 22728010-142e-4f29-887d-7bece248fbd2 8563516 0 2020-08-11 12:34:32 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-11 12:34:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 11 12:34:42.667: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-819 /api/v1/namespaces/watch-819/configmaps/e2e-watch-test-label-changed 22728010-142e-4f29-887d-7bece248fbd2 8563562 0 2020-08-11 12:34:32 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-11 12:34:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 12:34:42.667: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-819 /api/v1/namespaces/watch-819/configmaps/e2e-watch-test-label-changed 22728010-142e-4f29-887d-7bece248fbd2 8563563 0 2020-08-11 12:34:32 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-11 12:34:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 12:34:42.667: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-819 /api/v1/namespaces/watch-819/configmaps/e2e-watch-test-label-changed 22728010-142e-4f29-887d-7bece248fbd2 8563564 0 2020-08-11 12:34:32 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-11 12:34:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:34:42.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-819" for this suite.

• [SLOW TEST:10.102 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":131,"skipped":2174,"failed":0}
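Editor's note: the `FieldsV1{Raw:*[...]}` payloads in the watch events above are managed-fields JSON logged as decimal byte values. Decoding the byte array from the first ADDED event recovers the JSON directly:

```python
import json

# Byte values copied verbatim from the ADDED event's FieldsV1 Raw field above.
raw = bytes([123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58,
             123, 34, 102, 58, 108, 97, 98, 101, 108, 115, 34, 58, 123, 34,
             46, 34, 58, 123, 125, 44, 34, 102, 58, 119, 97, 116, 99, 104,
             45, 116, 104, 105, 115, 45, 99, 111, 110, 102, 105, 103, 109,
             97, 112, 34, 58, 123, 125, 125, 125, 125])
text = raw.decode("utf-8")
print(text)  # → {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}
fields = json.loads(text)  # valid managed-fields JSON
```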
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:34:42.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Aug 11 12:34:42.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:34:57.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7436" for this suite.

• [SLOW TEST:14.670 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":132,"skipped":2181,"failed":0}
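Editor's note: "mark a version not served" above corresponds to flipping `served` to `false` on one version of a multi-version CRD; once a version is no longer served, its definitions are removed from the published OpenAPI spec while the other version is unchanged. A schematic versions list (version names here are illustrative):

```yaml
spec:
  versions:
    - name: v1
      served: true      # still served; remains in the published OpenAPI spec
      storage: true
    - name: v2
      served: false     # no longer served; its definition is removed from the spec
      storage: false
```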
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:34:57.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 11 12:34:57.516: INFO: Waiting up to 5m0s for pod "pod-08ec3b05-da53-45e8-b9a5-a27d0f52fd34" in namespace "emptydir-2917" to be "Succeeded or Failed"
Aug 11 12:34:57.566: INFO: Pod "pod-08ec3b05-da53-45e8-b9a5-a27d0f52fd34": Phase="Pending", Reason="", readiness=false. Elapsed: 49.798703ms
Aug 11 12:34:59.712: INFO: Pod "pod-08ec3b05-da53-45e8-b9a5-a27d0f52fd34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194976962s
Aug 11 12:35:01.716: INFO: Pod "pod-08ec3b05-da53-45e8-b9a5-a27d0f52fd34": Phase="Running", Reason="", readiness=true. Elapsed: 4.199280393s
Aug 11 12:35:03.724: INFO: Pod "pod-08ec3b05-da53-45e8-b9a5-a27d0f52fd34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207053786s
STEP: Saw pod success
Aug 11 12:35:03.724: INFO: Pod "pod-08ec3b05-da53-45e8-b9a5-a27d0f52fd34" satisfied condition "Succeeded or Failed"
Aug 11 12:35:03.727: INFO: Trying to get logs from node kali-worker2 pod pod-08ec3b05-da53-45e8-b9a5-a27d0f52fd34 container test-container: 
STEP: delete the pod
Aug 11 12:35:03.775: INFO: Waiting for pod pod-08ec3b05-da53-45e8-b9a5-a27d0f52fd34 to disappear
Aug 11 12:35:03.850: INFO: Pod pod-08ec3b05-da53-45e8-b9a5-a27d0f52fd34 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:35:03.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2917" for this suite.

• [SLOW TEST:6.513 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2196,"failed":0}
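Editor's note: a tmpfs-backed emptyDir, as tested above, is requested with `medium: Memory`. A sketch of the volume definition (container details omitted; the test then inspects the mount's filesystem type and mode bits):

```yaml
volumes:
  - name: test-volume
    emptyDir:
      medium: Memory    # tmpfs-backed emptyDir; the test verifies the mount's mode
```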
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:35:03.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-0f85d417-581e-41d7-8479-908e7957fc30
STEP: Creating a pod to test consume secrets
Aug 11 12:35:04.317: INFO: Waiting up to 5m0s for pod "pod-secrets-f2481a49-9ba6-4e22-8580-cb6c4b7f19d7" in namespace "secrets-2951" to be "Succeeded or Failed"
Aug 11 12:35:04.323: INFO: Pod "pod-secrets-f2481a49-9ba6-4e22-8580-cb6c4b7f19d7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.943487ms
Aug 11 12:35:06.327: INFO: Pod "pod-secrets-f2481a49-9ba6-4e22-8580-cb6c4b7f19d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009793808s
Aug 11 12:35:08.330: INFO: Pod "pod-secrets-f2481a49-9ba6-4e22-8580-cb6c4b7f19d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013330904s
STEP: Saw pod success
Aug 11 12:35:08.330: INFO: Pod "pod-secrets-f2481a49-9ba6-4e22-8580-cb6c4b7f19d7" satisfied condition "Succeeded or Failed"
Aug 11 12:35:08.333: INFO: Trying to get logs from node kali-worker pod pod-secrets-f2481a49-9ba6-4e22-8580-cb6c4b7f19d7 container secret-volume-test: 
STEP: delete the pod
Aug 11 12:35:08.714: INFO: Waiting for pod pod-secrets-f2481a49-9ba6-4e22-8580-cb6c4b7f19d7 to disappear
Aug 11 12:35:08.725: INFO: Pod pod-secrets-f2481a49-9ba6-4e22-8580-cb6c4b7f19d7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:35:08.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2951" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2214,"failed":0}
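Editor's note: "mappings and Item Mode set" above means the secret volume uses an explicit `items` list to remap keys to paths, with a per-file `mode`. A sketch of that volume (the secret name is from this run; the key, path, and mode are illustrative):

```yaml
volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-0f85d417-581e-41d7-8479-908e7957fc30  # from this run
      items:
        - key: data-1            # illustrative key
          path: new-path-data-1  # remapped filename inside the mount
          mode: 0400             # per-file mode the test then verifies
```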
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:35:08.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-eba49e54-5985-4161-a2cf-047f5e1f2558
STEP: Creating a pod to test consume configMaps
Aug 11 12:35:08.847: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2e9c75bb-905b-4a02-8805-78bff9ad2c85" in namespace "projected-622" to be "Succeeded or Failed"
Aug 11 12:35:08.909: INFO: Pod "pod-projected-configmaps-2e9c75bb-905b-4a02-8805-78bff9ad2c85": Phase="Pending", Reason="", readiness=false. Elapsed: 62.412314ms
Aug 11 12:35:10.915: INFO: Pod "pod-projected-configmaps-2e9c75bb-905b-4a02-8805-78bff9ad2c85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067626012s
Aug 11 12:35:12.918: INFO: Pod "pod-projected-configmaps-2e9c75bb-905b-4a02-8805-78bff9ad2c85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071237206s
STEP: Saw pod success
Aug 11 12:35:12.918: INFO: Pod "pod-projected-configmaps-2e9c75bb-905b-4a02-8805-78bff9ad2c85" satisfied condition "Succeeded or Failed"
Aug 11 12:35:12.922: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-2e9c75bb-905b-4a02-8805-78bff9ad2c85 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 12:35:12.936: INFO: Waiting for pod pod-projected-configmaps-2e9c75bb-905b-4a02-8805-78bff9ad2c85 to disappear
Aug 11 12:35:12.941: INFO: Pod pod-projected-configmaps-2e9c75bb-905b-4a02-8805-78bff9ad2c85 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:35:12.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-622" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2224,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:35:12.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-ef36be4e-0cba-4dbe-a1ef-2acfbc21be0c in namespace container-probe-9367
Aug 11 12:35:17.296: INFO: Started pod busybox-ef36be4e-0cba-4dbe-a1ef-2acfbc21be0c in namespace container-probe-9367
STEP: checking the pod's current state and verifying that restartCount is present
Aug 11 12:35:17.299: INFO: Initial restart count of pod busybox-ef36be4e-0cba-4dbe-a1ef-2acfbc21be0c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:39:18.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9367" for this suite.

• [SLOW TEST:245.171 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2246,"failed":0}
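Editor's note: the probe test above runs an exec liveness probe (`cat /tmp/health`) and verifies the container's restart count stays at 0 over several minutes. A sketch of such a probe spec (image, command, and timing values are illustrative, not the test's exact settings):

```yaml
containers:
  - name: busybox
    image: busybox                   # illustrative
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]  # keeps the probe passing
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15        # illustrative timings
      periodSeconds: 5
      failureThreshold: 1
```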
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:39:18.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 11 12:39:18.200: INFO: >>> kubeConfig: /root/.kube/config
Aug 11 12:39:21.121: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:39:31.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9705" for this suite.

• [SLOW TEST:13.675 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":137,"skipped":2257,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:39:31.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-01c154e9-e2be-45cb-813c-62c23817c74f
STEP: Creating a pod to test consume secrets
Aug 11 12:39:31.911: INFO: Waiting up to 5m0s for pod "pod-secrets-2a798032-4891-4aa0-817a-7134e9d8fa1d" in namespace "secrets-2600" to be "Succeeded or Failed"
Aug 11 12:39:31.931: INFO: Pod "pod-secrets-2a798032-4891-4aa0-817a-7134e9d8fa1d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.909392ms
Aug 11 12:39:33.934: INFO: Pod "pod-secrets-2a798032-4891-4aa0-817a-7134e9d8fa1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023816514s
Aug 11 12:39:35.938: INFO: Pod "pod-secrets-2a798032-4891-4aa0-817a-7134e9d8fa1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027020287s
STEP: Saw pod success
Aug 11 12:39:35.938: INFO: Pod "pod-secrets-2a798032-4891-4aa0-817a-7134e9d8fa1d" satisfied condition "Succeeded or Failed"
Aug 11 12:39:35.940: INFO: Trying to get logs from node kali-worker pod pod-secrets-2a798032-4891-4aa0-817a-7134e9d8fa1d container secret-env-test: 
STEP: delete the pod
Aug 11 12:39:36.056: INFO: Waiting for pod pod-secrets-2a798032-4891-4aa0-817a-7134e9d8fa1d to disappear
Aug 11 12:39:36.064: INFO: Pod pod-secrets-2a798032-4891-4aa0-817a-7134e9d8fa1d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:39:36.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2600" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2268,"failed":0}
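The pattern this test exercises — injecting a Secret key into a container environment variable and checking the container's output — corresponds roughly to manifests like the following (a sketch with illustrative names; the run above uses generated names such as secret-test-01c154e9-...):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # the e2e framework generates this name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox             # illustrative; the test uses its own image
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```

The test waits for the pod to reach `Succeeded` and then reads the container log to confirm the secret value was exposed in the environment.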
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:39:36.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-6d91aa98-dfa5-4360-a7c1-006b1147a079
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:39:36.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4369" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":139,"skipped":2274,"failed":0}
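The negative case above relies on API-server validation: a Secret whose `data` map contains an empty key fails validation and the create request is rejected. A manifest like this (illustrative) reproduces the failure:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test
data:
  "": dmFsdWUtMQ==   # empty key is invalid; the API server rejects this object
```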
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:39:36.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:40:36.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6055" for this suite.

• [SLOW TEST:60.173 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2285,"failed":0}
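A readiness probe that always fails, as exercised by this test, can be declared as below (a sketch; the image and timings are illustrative). Because only a readiness probe is set, the pod is never marked Ready but is also never restarted — restarts are driven by liveness probes, not readiness probes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: busybox                    # illustrative image
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]       # always fails: pod never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
    # no livenessProbe, so the container is never restarted
```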
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:40:36.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 11 12:40:44.487: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 12:40:44.509: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 12:40:46.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 12:40:46.527: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 12:40:48.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 12:40:48.529: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 12:40:50.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 12:40:50.513: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 12:40:52.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 12:40:52.514: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 12:40:54.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 12:40:54.514: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:40:54.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4904" for this suite.

• [SLOW TEST:18.205 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2299,"failed":0}
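The postStart exec hook checked above is declared under `lifecycle.postStart` in the container spec. A minimal sketch (the actual e2e test's hook sends a request to the helper pod created in BeforeEach; the command here is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                     # illustrative image
    command: ["sleep", "600"]
    lifecycle:
      postStart:
        exec:
          # illustrative command; the e2e test contacts its hook-handler pod
          command: ["sh", "-c", "echo hook ran > /tmp/poststart"]
```

The hook runs immediately after the container starts, and the test verifies its side effect before deleting the pod and polling until it disappears.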
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:40:54.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:41:05.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9428" for this suite.

• [SLOW TEST:11.146 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":142,"skipped":2346,"failed":0}
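The quota lifecycle verified above can be expressed with an object-count quota on ReplicaSets (a sketch; the name and limit are illustrative). After the ResourceQuota is created, `status.used` rises when a ReplicaSet is created in the namespace and falls back when it is deleted, which is exactly what the "captures replicaset creation" and "released usage" steps assert:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    count/replicasets.apps: "5"   # illustrative limit on ReplicaSet object count
```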
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:41:05.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:41:05.817: INFO: Creating deployment "test-recreate-deployment"
Aug 11 12:41:05.822: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Aug 11 12:41:05.830: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 11 12:41:07.837: INFO: Waiting for deployment "test-recreate-deployment" to complete
Aug 11 12:41:07.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746465, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746465, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746465, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746465, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:41:09.843: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 11 12:41:09.850: INFO: Updating deployment test-recreate-deployment
Aug 11 12:41:09.850: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
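The behaviour being watched here — old pods fully terminated before any new pods start — is selected by the `Recreate` strategy in the Deployment spec. A minimal sketch consistent with the object dump below (labels and image taken from the log; everything else illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate        # delete all old pods before creating new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
```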
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 11 12:41:10.424: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-2886 /apis/apps/v1/namespaces/deployment-2886/deployments/test-recreate-deployment f10ae7a3-7b1f-46d5-b6c4-a0980f61226b 8564992 2 2020-08-11 12:41:05 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-11 12:41:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 
101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-11 12:41:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 
34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a974e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-11 12:41:10 +0000 UTC,LastTransitionTime:2020-08-11 12:41:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet 
"test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-08-11 12:41:10 +0000 UTC,LastTransitionTime:2020-08-11 12:41:05 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 11 12:41:10.427: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-2886 /apis/apps/v1/namespaces/deployment-2886/replicasets/test-recreate-deployment-d5667d9c7 574be5d4-7617-4c9b-ba44-bf26528149c3 8564990 1 2020-08-11 12:41:10 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment f10ae7a3-7b1f-46d5-b6c4-a0980f61226b 0xc003ae0cb0 0xc003ae0cb1}] []  [{kube-controller-manager Update apps/v1 2020-08-11 12:41:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 49 48 97 101 55 97 51 45 55 98 49 102 45 52 54 100 53 45 98 54 99 52 45 97 48 57 56 48 102 54 49 50 50 54 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 
123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 
58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003ae0d68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 11 12:41:10.427: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 11 12:41:10.427: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-2886 /apis/apps/v1/namespaces/deployment-2886/replicasets/test-recreate-deployment-74d98b5f7c 85385312-7ad1-4fcc-ba53-5cf6b231ae59 8564980 2 2020-08-11 12:41:05 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment f10ae7a3-7b1f-46d5-b6c4-a0980f61226b 0xc003ae0b67 0xc003ae0b68}] []  [{kube-controller-manager Update apps/v1 2020-08-11 12:41:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 49 48 97 101 55 97 51 45 55 98 49 102 45 52 54 100 53 45 98 54 99 52 45 97 48 57 56 48 102 54 49 50 50 54 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 
111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003ae0c28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 11 12:41:10.430: INFO: Pod "test-recreate-deployment-d5667d9c7-dpvxn" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-dpvxn test-recreate-deployment-d5667d9c7- deployment-2886 /api/v1/namespaces/deployment-2886/pods/test-recreate-deployment-d5667d9c7-dpvxn 42b013da-489a-4b99-ab64-f89c1919f6f3 8564991 0 2020-08-11 12:41:10 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 574be5d4-7617-4c9b-ba44-bf26528149c3 0xc003ae13d0 0xc003ae13d1}] []  [{kube-controller-manager Update v1 2020-08-11 12:41:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"574be5d4-7617-4c9b-ba44-bf26528149c3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 12:41:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fc6hc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fc6hc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fc6hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:41:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:41:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:41:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:41:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-11 12:41:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:41:10.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2886" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":143,"skipped":2355,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:41:10.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:41:10.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9414" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":144,"skipped":2360,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:41:10.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 11 12:41:11.943: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:12.022: INFO: Number of nodes with available pods: 0
Aug 11 12:41:12.022: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:41:13.158: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:13.162: INFO: Number of nodes with available pods: 0
Aug 11 12:41:13.162: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:41:14.034: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:14.075: INFO: Number of nodes with available pods: 0
Aug 11 12:41:14.075: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:41:15.027: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:15.030: INFO: Number of nodes with available pods: 0
Aug 11 12:41:15.030: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:41:16.267: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:16.270: INFO: Number of nodes with available pods: 0
Aug 11 12:41:16.270: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:41:17.032: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:17.043: INFO: Number of nodes with available pods: 0
Aug 11 12:41:17.043: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:41:18.074: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:18.077: INFO: Number of nodes with available pods: 2
Aug 11 12:41:18.077: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 11 12:41:18.144: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:18.148: INFO: Number of nodes with available pods: 1
Aug 11 12:41:18.148: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:19.153: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:19.157: INFO: Number of nodes with available pods: 1
Aug 11 12:41:19.157: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:20.153: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:20.157: INFO: Number of nodes with available pods: 1
Aug 11 12:41:20.157: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:21.307: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:21.327: INFO: Number of nodes with available pods: 1
Aug 11 12:41:21.327: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:22.201: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:22.205: INFO: Number of nodes with available pods: 1
Aug 11 12:41:22.205: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:23.155: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:23.160: INFO: Number of nodes with available pods: 1
Aug 11 12:41:23.160: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:24.153: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:24.158: INFO: Number of nodes with available pods: 1
Aug 11 12:41:24.158: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:25.153: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:25.157: INFO: Number of nodes with available pods: 1
Aug 11 12:41:25.157: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:26.154: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:26.158: INFO: Number of nodes with available pods: 1
Aug 11 12:41:26.158: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:27.153: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:27.157: INFO: Number of nodes with available pods: 1
Aug 11 12:41:27.157: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:28.153: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:28.157: INFO: Number of nodes with available pods: 1
Aug 11 12:41:28.157: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:29.153: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:29.157: INFO: Number of nodes with available pods: 1
Aug 11 12:41:29.157: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:30.154: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:30.157: INFO: Number of nodes with available pods: 1
Aug 11 12:41:30.157: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:31.152: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:31.156: INFO: Number of nodes with available pods: 1
Aug 11 12:41:31.156: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:32.160: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:32.163: INFO: Number of nodes with available pods: 1
Aug 11 12:41:32.163: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:33.398: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:33.401: INFO: Number of nodes with available pods: 1
Aug 11 12:41:33.401: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:34.154: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:34.157: INFO: Number of nodes with available pods: 1
Aug 11 12:41:34.157: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:35.272: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:35.276: INFO: Number of nodes with available pods: 1
Aug 11 12:41:35.276: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:36.153: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:36.157: INFO: Number of nodes with available pods: 1
Aug 11 12:41:36.157: INFO: Node kali-worker2 is running more than one daemon pod
Aug 11 12:41:37.175: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 12:41:37.186: INFO: Number of nodes with available pods: 2
Aug 11 12:41:37.186: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-52, will wait for the garbage collector to delete the pods
Aug 11 12:41:37.249: INFO: Deleting DaemonSet.extensions daemon-set took: 6.518125ms
Aug 11 12:41:37.549: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.284208ms
Aug 11 12:41:43.475: INFO: Number of nodes with available pods: 0
Aug 11 12:41:43.475: INFO: Number of running nodes: 0, number of available pods: 0
Aug 11 12:41:43.478: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-52/daemonsets","resourceVersion":"8565195"},"items":null}

Aug 11 12:41:43.480: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-52/pods","resourceVersion":"8565195"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:41:43.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-52" for this suite.

• [SLOW TEST:32.907 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":145,"skipped":2378,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:41:43.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0811 12:41:44.881995       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 12:41:44.882: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:41:44.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3734" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":146,"skipped":2394,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:41:44.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:41:44.958: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6744423e-6bed-42a8-9bba-82dcdaea1680" in namespace "security-context-test-9941" to be "Succeeded or Failed"
Aug 11 12:41:45.061: INFO: Pod "alpine-nnp-false-6744423e-6bed-42a8-9bba-82dcdaea1680": Phase="Pending", Reason="", readiness=false. Elapsed: 102.858883ms
Aug 11 12:41:47.065: INFO: Pod "alpine-nnp-false-6744423e-6bed-42a8-9bba-82dcdaea1680": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106349911s
Aug 11 12:41:49.068: INFO: Pod "alpine-nnp-false-6744423e-6bed-42a8-9bba-82dcdaea1680": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109488966s
Aug 11 12:41:51.072: INFO: Pod "alpine-nnp-false-6744423e-6bed-42a8-9bba-82dcdaea1680": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113856081s
Aug 11 12:41:51.072: INFO: Pod "alpine-nnp-false-6744423e-6bed-42a8-9bba-82dcdaea1680" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:41:51.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9941" for this suite.

• [SLOW TEST:6.286 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2424,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:41:51.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0811 12:42:01.580636       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 12:42:01.580: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:42:01.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2869" for this suite.

• [SLOW TEST:10.433 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":148,"skipped":2453,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:42:01.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7485.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7485.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7485.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7485.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7485.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7485.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

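For readability, the probe commands above derive the pod A-record name (`<ip-with-dashes>.<namespace>.pod.cluster.local`) from the pod's IP before querying it with dig. A minimal sketch of just that name-construction step, using a hardcoded sample IP in place of `hostname -i` (the IP and namespace here are illustrative, not taken from this run):

```shell
#!/bin/sh
# Build a Kubernetes pod A-record name from an IP, mirroring the
# awk pipeline in the wheezy/jessie probe commands above.
ip="10.244.1.7"   # sample pod IP (assumed for illustration)
ns="dns-7485"     # namespace from the test above
podARec=$(echo "$ip" | awk -F. -v ns="$ns" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}')
echo "$podARec"   # -> 10-244-1-7.dns-7485.pod.cluster.local
```

In the real probe the double `$$` in the logged command is template escaping for a single `$`, and the resulting name is then resolved with `dig +noall +answer +search` over both UDP and TCP.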
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 11 12:42:07.725: INFO: DNS probes using dns-7485/dns-test-0a463659-feee-4153-add1-4a3cc96e5dcf succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:42:07.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7485" for this suite.

• [SLOW TEST:6.199 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":149,"skipped":2489,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:42:07.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:42:08.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1379300a-e466-4393-ac67-b632a4e3534d" in namespace "downward-api-5919" to be "Succeeded or Failed"
Aug 11 12:42:08.161: INFO: Pod "downwardapi-volume-1379300a-e466-4393-ac67-b632a4e3534d": Phase="Pending", Reason="", readiness=false. Elapsed: 84.307473ms
Aug 11 12:42:10.166: INFO: Pod "downwardapi-volume-1379300a-e466-4393-ac67-b632a4e3534d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089609706s
Aug 11 12:42:12.171: INFO: Pod "downwardapi-volume-1379300a-e466-4393-ac67-b632a4e3534d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094420334s
STEP: Saw pod success
Aug 11 12:42:12.171: INFO: Pod "downwardapi-volume-1379300a-e466-4393-ac67-b632a4e3534d" satisfied condition "Succeeded or Failed"
Aug 11 12:42:12.174: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-1379300a-e466-4393-ac67-b632a4e3534d container client-container: 
STEP: delete the pod
Aug 11 12:42:12.464: INFO: Waiting for pod downwardapi-volume-1379300a-e466-4393-ac67-b632a4e3534d to disappear
Aug 11 12:42:12.472: INFO: Pod downwardapi-volume-1379300a-e466-4393-ac67-b632a4e3534d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:42:12.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5919" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2493,"failed":0}
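The Downward API test above checks that `defaultMode` is applied to the projected files. A minimal manifest sketch of that mechanism (pod and path names here are illustrative, not the ones the suite generates):

```yaml
# Hypothetical pod; the e2e suite generates its own names.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # every projected file gets this mode unless an item overrides it
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```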

------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:42:12.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 11 12:42:12.831: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5375 /api/v1/namespaces/watch-5375/configmaps/e2e-watch-test-watch-closed d0813d59-920b-4bb6-b2ee-b9133e1c6927 8565470 0 2020-08-11 12:42:12 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-11 12:42:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 12:42:12.831: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5375 /api/v1/namespaces/watch-5375/configmaps/e2e-watch-test-watch-closed d0813d59-920b-4bb6-b2ee-b9133e1c6927 8565471 0 2020-08-11 12:42:12 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-11 12:42:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 11 12:42:12.894: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5375 /api/v1/namespaces/watch-5375/configmaps/e2e-watch-test-watch-closed d0813d59-920b-4bb6-b2ee-b9133e1c6927 8565473 0 2020-08-11 12:42:12 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-11 12:42:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 11 12:42:12.894: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5375 /api/v1/namespaces/watch-5375/configmaps/e2e-watch-test-watch-closed d0813d59-920b-4bb6-b2ee-b9133e1c6927 8565475 0 2020-08-11 12:42:12 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-11 12:42:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:42:12.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5375" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":151,"skipped":2493,"failed":0}
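The Watchers test resumes from the resourceVersion last delivered by the first watch (8565471 above), so the MODIFIED (mutation: 2) and DELETED events that occurred while it was closed are replayed. The same resume semantics can be exercised against the API directly; the namespace and version below mirror this run but are otherwise just an illustration:

```shell
# Resume a ConfigMap watch from a previously observed resourceVersion.
# Values mirror the log above; adjust for your cluster.
RV=8565471
URL="/api/v1/namespaces/watch-5375/configmaps?watch=true&resourceVersion=${RV}"
echo "$URL"
# With `kubectl proxy` running on its default port:
#   curl -N "http://127.0.0.1:8001${URL}"
# The stream replays every event newer than RV before going live.
```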
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:42:12.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Aug 11 12:42:12.989: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Aug 11 12:42:13.023: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Aug 11 12:42:13.023: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Aug 11 12:42:13.066: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Aug 11 12:42:13.066: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Aug 11 12:42:13.170: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Aug 11 12:42:13.170: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Aug 11 12:42:20.628: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:42:20.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-3046" for this suite.

• [SLOW TEST:7.910 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":152,"skipped":2511,"failed":0}
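The defaults verified above (requests of 100m CPU / 209715200 = 200Mi memory / 214748364800 = 200Gi ephemeral-storage, limits of 500m / 500Mi / 500Gi) correspond to a LimitRange along these lines; this is a sketch reconstructed from the logged quantities, not the suite's actual manifest:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-demo    # illustrative name
spec:
  limits:
  - type: Container
    default:               # applied as limits when a container sets none
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
    defaultRequest:        # applied as requests when a container sets none
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
```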
SSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:42:20.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug 11 12:42:20.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1999'
Aug 11 12:42:21.490: INFO: stderr: ""
Aug 11 12:42:21.490: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 11 12:42:21.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1999'
Aug 11 12:42:21.632: INFO: stderr: ""
Aug 11 12:42:21.632: INFO: stdout: "update-demo-nautilus-nglpd update-demo-nautilus-ttkm8 "
Aug 11 12:42:21.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nglpd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1999'
Aug 11 12:42:21.781: INFO: stderr: ""
Aug 11 12:42:21.781: INFO: stdout: ""
Aug 11 12:42:21.781: INFO: update-demo-nautilus-nglpd is created but not running
Aug 11 12:42:26.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1999'
Aug 11 12:42:27.146: INFO: stderr: ""
Aug 11 12:42:27.146: INFO: stdout: "update-demo-nautilus-nglpd update-demo-nautilus-ttkm8 "
Aug 11 12:42:27.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nglpd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1999'
Aug 11 12:42:27.373: INFO: stderr: ""
Aug 11 12:42:27.373: INFO: stdout: "true"
Aug 11 12:42:27.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nglpd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1999'
Aug 11 12:42:27.469: INFO: stderr: ""
Aug 11 12:42:27.469: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 12:42:27.469: INFO: validating pod update-demo-nautilus-nglpd
Aug 11 12:42:27.560: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 12:42:27.560: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 11 12:42:27.560: INFO: update-demo-nautilus-nglpd is verified up and running
Aug 11 12:42:27.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ttkm8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1999'
Aug 11 12:42:27.653: INFO: stderr: ""
Aug 11 12:42:27.653: INFO: stdout: "true"
Aug 11 12:42:27.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ttkm8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1999'
Aug 11 12:42:27.758: INFO: stderr: ""
Aug 11 12:42:27.758: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 12:42:27.758: INFO: validating pod update-demo-nautilus-ttkm8
Aug 11 12:42:27.849: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 12:42:27.849: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 11 12:42:27.849: INFO: update-demo-nautilus-ttkm8 is verified up and running
STEP: using delete to clean up resources
Aug 11 12:42:27.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1999'
Aug 11 12:42:27.982: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 12:42:27.982: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 11 12:42:27.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1999'
Aug 11 12:42:28.256: INFO: stderr: "No resources found in kubectl-1999 namespace.\n"
Aug 11 12:42:28.256: INFO: stdout: ""
Aug 11 12:42:28.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1999 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 11 12:42:28.671: INFO: stderr: ""
Aug 11 12:42:28.671: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:42:28.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1999" for this suite.

• [SLOW TEST:8.228 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":153,"skipped":2514,"failed":0}
SSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:42:29.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-ef5b48ac-29d1-4e18-a2ac-5bdb45b96c0f
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:42:29.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4339" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":154,"skipped":2517,"failed":0}
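The validation exercised above rejects an empty data key at create time. A sketch of the kind of manifest the API server refuses (name is illustrative):

```yaml
# Intentionally invalid: "" is not a legal ConfigMap data key,
# so the API server rejects this object at create time.
apiVersion: v1
kind: ConfigMap
metadata:
  name: empty-key-demo   # illustrative
data:
  "": "value"
```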
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:42:29.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:42:33.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7782" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2538,"failed":0}
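The hostAliases test relies on the kubelet merging `spec.hostAliases` entries into the pod's /etc/hosts. A minimal sketch (addresses and hostnames are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo   # illustrative
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]
```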
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:42:33.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:42:38.026: INFO: Waiting up to 5m0s for pod "client-envvars-90d086b0-3113-438f-b383-ede9dcac4fc2" in namespace "pods-3771" to be "Succeeded or Failed"
Aug 11 12:42:38.041: INFO: Pod "client-envvars-90d086b0-3113-438f-b383-ede9dcac4fc2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.978655ms
Aug 11 12:42:40.140: INFO: Pod "client-envvars-90d086b0-3113-438f-b383-ede9dcac4fc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113863403s
Aug 11 12:42:42.144: INFO: Pod "client-envvars-90d086b0-3113-438f-b383-ede9dcac4fc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117672968s
STEP: Saw pod success
Aug 11 12:42:42.144: INFO: Pod "client-envvars-90d086b0-3113-438f-b383-ede9dcac4fc2" satisfied condition "Succeeded or Failed"
Aug 11 12:42:42.147: INFO: Trying to get logs from node kali-worker pod client-envvars-90d086b0-3113-438f-b383-ede9dcac4fc2 container env3cont: 
STEP: delete the pod
Aug 11 12:42:42.195: INFO: Waiting for pod client-envvars-90d086b0-3113-438f-b383-ede9dcac4fc2 to disappear
Aug 11 12:42:42.213: INFO: Pod client-envvars-90d086b0-3113-438f-b383-ede9dcac4fc2 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:42:42.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3771" for this suite.

• [SLOW TEST:8.367 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2573,"failed":0}
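The Pods test above depends on the kubelet injecting `<SVC>_SERVICE_HOST` / `<SVC>_SERVICE_PORT` variables for services that already exist in the pod's namespace when it starts. A hedged sketch of the naming convention (service name upper-cased, dashes mapped to underscores; the service name is made up):

```shell
# Derive the env-var prefix Kubernetes uses for a given service name.
svc="foo-service"
prefix=$(echo "$svc" | tr 'a-z-' 'A-Z_')
echo "${prefix}_SERVICE_HOST"
# A pod started after the service exists would see this variable set
# to the service's cluster IP.
```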
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:42:42.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 11 12:42:46.486: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:42:46.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5995" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2581,"failed":0}
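The termination-message test above exercises `terminationMessagePolicy: FallbackToLogsOnError`, under which the last log lines populate the termination message only when the container fails without writing to its termination-message path; a succeeding container that writes nothing yields an empty message, as the log shows. A sketch (names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo done"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```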

------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:42:46.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 11 12:42:51.864: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:42:51.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1914" for this suite.

• [SLOW TEST:5.360 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":158,"skipped":2581,"failed":0}
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:42:52.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 11 12:42:58.161: INFO: &Pod{ObjectMeta:{send-events-9ca32d10-b6ef-4d03-a539-b3c3852ae840  events-9796 /api/v1/namespaces/events-9796/pods/send-events-9ca32d10-b6ef-4d03-a539-b3c3852ae840 2eee2366-7ee3-4ade-a19f-f9d096107cae 8565893 0 2020-08-11 12:42:52 +0000 UTC   map[name:foo time:135942037] map[] [] []  [{e2e.test Update v1 2020-08-11 12:42:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 
44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 12:42:56 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 48 53 92 34 
125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d8sc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d8sc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d8sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupCha
ngePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:42:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:42:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:42:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:42:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.105,StartTime:2020-08-11 12:42:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 12:42:55 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://00a437ebba07ca876f65f43f4717e85616d45bc8e5023cbb76ac1647fa8885be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.105,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
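The managed-fields byte dump above obscures it, but the pod under test reduces to roughly this manifest (reconstructed from the Spec fields printed in the log; the service-account token volume, tolerations, and other defaults are injected by the cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: send-events-9ca32d10-b6ef-4d03-a539-b3c3852ae840
  namespace: events-9796
  labels:
    name: foo
    time: "135942037"
spec:
  restartPolicy: Always
  containers:
  - name: p
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["serve-hostname"]
    ports:
    - containerPort: 80
      protocol: TCP
```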

STEP: checking for scheduler event about the pod
Aug 11 12:43:00.166: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 11 12:43:02.171: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:43:02.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9796" for this suite.

• [SLOW TEST:10.215 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":159,"skipped":2581,"failed":0}
SS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:43:02.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-9507
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9507
STEP: Deleting pre-stop pod
Aug 11 12:43:15.371: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:43:15.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9507" for this suite.

• [SLOW TEST:13.185 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":160,"skipped":2583,"failed":0}
SSSSSSSSSSSSSS
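The test above verifies that the server pod's pre-stop handler fires before the pod is killed (the `"prestop": 1` counter in the Saw: output). A generic `preStop` hook looks like this; this is a minimal sketch, and the pod name, image, and command are illustrative, not the ones the e2e framework uses:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx                # illustrative image
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before SIGTERM is delivered,
          # within the pod's terminationGracePeriodSeconds budget.
          command: ["/bin/sh", "-c", "curl -s http://server/prestop"]
```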
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:43:15.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:43:33.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7729" for this suite.

• [SLOW TEST:18.140 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":161,"skipped":2597,"failed":0}
SSSSSSSSSSSSSSSSS
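"Locally restarted" here means the kubelet restarts the failing container in place rather than the Job controller replacing the pod; that behavior comes from `restartPolicy: OnFailure` on the pod template. A minimal sketch, with an illustrative image and command (an emptyDir survives container restarts within the pod, so the second run succeeds):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sometimes-fails         # hypothetical name
spec:
  completions: 1
  template:
    spec:
      # OnFailure => the kubelet restarts the container in the same pod
      # instead of the Job creating replacement pods.
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: busybox          # illustrative image
        # Fail on the first run, succeed after the local restart.
        command: ["sh", "-c", "if [ -f /data/ok ]; then exit 0; else touch /data/ok; exit 1; fi"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
```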
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:43:33.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-17e1ac1c-2e9a-4988-b50c-759b5bc3d3bf
STEP: Creating a pod to test consume configMaps
Aug 11 12:43:33.661: INFO: Waiting up to 5m0s for pod "pod-configmaps-752b98b3-d0ff-4077-b0e8-0e3ac629d5cc" in namespace "configmap-3610" to be "Succeeded or Failed"
Aug 11 12:43:33.680: INFO: Pod "pod-configmaps-752b98b3-d0ff-4077-b0e8-0e3ac629d5cc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.648131ms
Aug 11 12:43:36.015: INFO: Pod "pod-configmaps-752b98b3-d0ff-4077-b0e8-0e3ac629d5cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.353568761s
Aug 11 12:43:38.019: INFO: Pod "pod-configmaps-752b98b3-d0ff-4077-b0e8-0e3ac629d5cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.358141292s
STEP: Saw pod success
Aug 11 12:43:38.019: INFO: Pod "pod-configmaps-752b98b3-d0ff-4077-b0e8-0e3ac629d5cc" satisfied condition "Succeeded or Failed"
Aug 11 12:43:38.023: INFO: Trying to get logs from node kali-worker pod pod-configmaps-752b98b3-d0ff-4077-b0e8-0e3ac629d5cc container configmap-volume-test: 
STEP: delete the pod
Aug 11 12:43:38.068: INFO: Waiting for pod pod-configmaps-752b98b3-d0ff-4077-b0e8-0e3ac629d5cc to disappear
Aug 11 12:43:38.122: INFO: Pod pod-configmaps-752b98b3-d0ff-4077-b0e8-0e3ac629d5cc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:43:38.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3610" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2614,"failed":0}
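The pattern this test exercises is a ConfigMap mounted as a volume, with each key surfacing as a file under the mount path. A minimal sketch; the names and data below are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config             # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox              # illustrative image
    # Each ConfigMap key becomes a file under the mount path.
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
```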

------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:43:38.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:43:38.253: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/: 
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-2dc97121-a742-4691-8f7a-f3b04abe0090
STEP: Creating a pod to test consume configMaps
Aug 11 12:43:38.437: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba0c1ab4-2e03-46b9-adc4-8c77cd9f6655" in namespace "configmap-9937" to be "Succeeded or Failed"
Aug 11 12:43:38.517: INFO: Pod "pod-configmaps-ba0c1ab4-2e03-46b9-adc4-8c77cd9f6655": Phase="Pending", Reason="", readiness=false. Elapsed: 80.525648ms
Aug 11 12:43:40.553: INFO: Pod "pod-configmaps-ba0c1ab4-2e03-46b9-adc4-8c77cd9f6655": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115974926s
Aug 11 12:43:42.557: INFO: Pod "pod-configmaps-ba0c1ab4-2e03-46b9-adc4-8c77cd9f6655": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120322299s
STEP: Saw pod success
Aug 11 12:43:42.557: INFO: Pod "pod-configmaps-ba0c1ab4-2e03-46b9-adc4-8c77cd9f6655" satisfied condition "Succeeded or Failed"
Aug 11 12:43:42.560: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-ba0c1ab4-2e03-46b9-adc4-8c77cd9f6655 container configmap-volume-test: 
STEP: delete the pod
Aug 11 12:43:42.653: INFO: Waiting for pod pod-configmaps-ba0c1ab4-2e03-46b9-adc4-8c77cd9f6655 to disappear
Aug 11 12:43:42.657: INFO: Pod pod-configmaps-ba0c1ab4-2e03-46b9-adc4-8c77cd9f6655 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:43:42.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9937" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2633,"failed":0}
SSSSSSSSSSS
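The multiple-volumes variant mounts the same ConfigMap at two paths in one pod; both mounts see the same keys. A minimal sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-mounts    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox              # illustrative image
    command: ["sh", "-c", "cat /etc/config-a/data-1 /etc/config-b/data-1"]
    volumeMounts:
    - name: config-a
      mountPath: /etc/config-a
    - name: config-b
      mountPath: /etc/config-b
  volumes:
  # Two volumes backed by the same ConfigMap.
  - name: config-a
    configMap:
      name: demo-config         # hypothetical ConfigMap
  - name: config-b
    configMap:
      name: demo-config
```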
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:43:42.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 11 12:43:42.741: INFO: Waiting up to 5m0s for pod "pod-e7c825c1-fb50-4392-be0f-39f0d4615251" in namespace "emptydir-7579" to be "Succeeded or Failed"
Aug 11 12:43:42.758: INFO: Pod "pod-e7c825c1-fb50-4392-be0f-39f0d4615251": Phase="Pending", Reason="", readiness=false. Elapsed: 17.480253ms
Aug 11 12:43:44.764: INFO: Pod "pod-e7c825c1-fb50-4392-be0f-39f0d4615251": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022696425s
Aug 11 12:43:46.768: INFO: Pod "pod-e7c825c1-fb50-4392-be0f-39f0d4615251": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026979501s
STEP: Saw pod success
Aug 11 12:43:46.768: INFO: Pod "pod-e7c825c1-fb50-4392-be0f-39f0d4615251" satisfied condition "Succeeded or Failed"
Aug 11 12:43:46.772: INFO: Trying to get logs from node kali-worker pod pod-e7c825c1-fb50-4392-be0f-39f0d4615251 container test-container: 
STEP: delete the pod
Aug 11 12:43:47.136: INFO: Waiting for pod pod-e7c825c1-fb50-4392-be0f-39f0d4615251 to disappear
Aug 11 12:43:47.171: INFO: Pod pod-e7c825c1-fb50-4392-be0f-39f0d4615251 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:43:47.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7579" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2644,"failed":0}
SSSS
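"(non-root,0666,default)" means: a non-root user writes a 0666-mode file on an emptyDir backed by the default medium (node disk, not tmpfs). A minimal sketch of the same shape, with illustrative names and command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo      # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000             # non-root
  containers:
  - name: test
    image: busybox              # illustrative image
    # Create a mode-0666 file on the emptyDir and show its permissions.
    command: ["sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}                # default medium (omit medium: Memory)
```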
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:43:47.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-3b9abe0d-216a-49a0-9172-4b6b398dd60d
STEP: Creating a pod to test consume configMaps
Aug 11 12:43:47.305: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ac4efa8e-5542-483a-b162-a041b0553472" in namespace "projected-3148" to be "Succeeded or Failed"
Aug 11 12:43:47.333: INFO: Pod "pod-projected-configmaps-ac4efa8e-5542-483a-b162-a041b0553472": Phase="Pending", Reason="", readiness=false. Elapsed: 27.946875ms
Aug 11 12:43:49.374: INFO: Pod "pod-projected-configmaps-ac4efa8e-5542-483a-b162-a041b0553472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068231462s
Aug 11 12:43:51.377: INFO: Pod "pod-projected-configmaps-ac4efa8e-5542-483a-b162-a041b0553472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071674042s
STEP: Saw pod success
Aug 11 12:43:51.377: INFO: Pod "pod-projected-configmaps-ac4efa8e-5542-483a-b162-a041b0553472" satisfied condition "Succeeded or Failed"
Aug 11 12:43:51.379: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-ac4efa8e-5542-483a-b162-a041b0553472 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 12:43:51.411: INFO: Waiting for pod pod-projected-configmaps-ac4efa8e-5542-483a-b162-a041b0553472 to disappear
Aug 11 12:43:51.418: INFO: Pod pod-projected-configmaps-ac4efa8e-5542-483a-b162-a041b0553472 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:43:51.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3148" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2648,"failed":0}
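"With mappings" refers to the `items` list of a projected ConfigMap source, which surfaces a key under a different file path than the key name. A minimal sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                 # illustrative image
    command: ["cat", "/etc/projected/renamed-data"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config        # hypothetical ConfigMap
          items:
          # The mapping: key "data-1" appears as file "renamed-data".
          - key: data-1
            path: renamed-data
```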

------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:43:51.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 11 12:43:51.498: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:43:59.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8590" for this suite.

• [SLOW TEST:8.132 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":167,"skipped":2648,"failed":0}
SSSSSSSSSSSSSSSSSS
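On a `restartPolicy: Never` pod, init containers still run to completion, in order, before the app container starts; the test above asserts exactly that invocation. A minimal sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo               # hypothetical name
spec:
  restartPolicy: Never
  # Init containers run sequentially to completion before "app" starts.
  initContainers:
  - name: init-1
    image: busybox              # illustrative image
    command: ["true"]
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo ready"]
```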
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:43:59.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-3149
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3149
STEP: creating replication controller externalsvc in namespace services-3149
I0811 12:44:00.625747       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3149, replica count: 2
I0811 12:44:03.676271       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 12:44:06.676518       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 11 12:44:06.764: INFO: Creating new exec pod
Aug 11 12:44:10.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3149 execpod4sd7q -- /bin/sh -x -c nslookup nodeport-service'
Aug 11 12:44:16.923: INFO: stderr: "I0811 12:44:16.815805    1591 log.go:172] (0xc00003abb0) (0xc000ad6280) Create stream\nI0811 12:44:16.815871    1591 log.go:172] (0xc00003abb0) (0xc000ad6280) Stream added, broadcasting: 1\nI0811 12:44:16.818476    1591 log.go:172] (0xc00003abb0) Reply frame received for 1\nI0811 12:44:16.818514    1591 log.go:172] (0xc00003abb0) (0xc0005c9680) Create stream\nI0811 12:44:16.818527    1591 log.go:172] (0xc00003abb0) (0xc0005c9680) Stream added, broadcasting: 3\nI0811 12:44:16.819400    1591 log.go:172] (0xc00003abb0) Reply frame received for 3\nI0811 12:44:16.819433    1591 log.go:172] (0xc00003abb0) (0xc0004e6aa0) Create stream\nI0811 12:44:16.819453    1591 log.go:172] (0xc00003abb0) (0xc0004e6aa0) Stream added, broadcasting: 5\nI0811 12:44:16.820261    1591 log.go:172] (0xc00003abb0) Reply frame received for 5\nI0811 12:44:16.904999    1591 log.go:172] (0xc00003abb0) Data frame received for 5\nI0811 12:44:16.905025    1591 log.go:172] (0xc0004e6aa0) (5) Data frame handling\nI0811 12:44:16.905045    1591 log.go:172] (0xc0004e6aa0) (5) Data frame sent\n+ nslookup nodeport-service\nI0811 12:44:16.912605    1591 log.go:172] (0xc00003abb0) Data frame received for 3\nI0811 12:44:16.912623    1591 log.go:172] (0xc0005c9680) (3) Data frame handling\nI0811 12:44:16.912637    1591 log.go:172] (0xc0005c9680) (3) Data frame sent\nI0811 12:44:16.913621    1591 log.go:172] (0xc00003abb0) Data frame received for 3\nI0811 12:44:16.913659    1591 log.go:172] (0xc0005c9680) (3) Data frame handling\nI0811 12:44:16.913696    1591 log.go:172] (0xc0005c9680) (3) Data frame sent\nI0811 12:44:16.914069    1591 log.go:172] (0xc00003abb0) Data frame received for 5\nI0811 12:44:16.914120    1591 log.go:172] (0xc0004e6aa0) (5) Data frame handling\nI0811 12:44:16.914155    1591 log.go:172] (0xc00003abb0) Data frame received for 3\nI0811 12:44:16.914175    1591 log.go:172] (0xc0005c9680) (3) Data frame handling\nI0811 12:44:16.915913    1591 log.go:172] 
(0xc00003abb0) Data frame received for 1\nI0811 12:44:16.915931    1591 log.go:172] (0xc000ad6280) (1) Data frame handling\nI0811 12:44:16.915941    1591 log.go:172] (0xc000ad6280) (1) Data frame sent\nI0811 12:44:16.915957    1591 log.go:172] (0xc00003abb0) (0xc000ad6280) Stream removed, broadcasting: 1\nI0811 12:44:16.915975    1591 log.go:172] (0xc00003abb0) Go away received\nI0811 12:44:16.916366    1591 log.go:172] (0xc00003abb0) (0xc000ad6280) Stream removed, broadcasting: 1\nI0811 12:44:16.916386    1591 log.go:172] (0xc00003abb0) (0xc0005c9680) Stream removed, broadcasting: 3\nI0811 12:44:16.916403    1591 log.go:172] (0xc00003abb0) (0xc0004e6aa0) Stream removed, broadcasting: 5\n"
Aug 11 12:44:16.923: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3149.svc.cluster.local\tcanonical name = externalsvc.services-3149.svc.cluster.local.\nName:\texternalsvc.services-3149.svc.cluster.local\nAddress: 10.106.192.141\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3149, will wait for the garbage collector to delete the pods
Aug 11 12:44:16.984: INFO: Deleting ReplicationController externalsvc took: 6.746048ms
Aug 11 12:44:17.385: INFO: Terminating ReplicationController externalsvc pods took: 400.233499ms
Aug 11 12:44:21.825: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:44:21.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3149" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:22.365 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":168,"skipped":2666,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
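After the type change, DNS lookups of `nodeport-service` return a CNAME instead of a ClusterIP, which is what the nslookup output above shows. The post-change Service reduces to roughly this (reconstructed from the log; the externalName matches the canonical name in the nslookup stdout):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-3149
spec:
  type: ExternalName
  # DNS for nodeport-service.services-3149.svc.cluster.local now resolves
  # as a CNAME to this name; the NodePort allocation is released.
  externalName: externalsvc.services-3149.svc.cluster.local
```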
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:44:21.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:44:22.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2179" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":169,"skipped":2687,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
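The flow above is create, patch a label (and data) in, then delete by that label with a LabelSelector. A sketch of a strategic-merge patch body that would add such a label; the label key/value and data key are illustrative:

```yaml
# Strategic-merge patch body for "kubectl patch secret <name> -p '...'".
metadata:
  labels:
    testsecret: "true"          # illustrative label; a LabelSelector delete
                                # (e.g. -l testsecret=true) then matches it
data:
  key: dmFsdWUy                 # base64("value2"); replaces the existing value
```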
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:44:22.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:44:22.289: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 11 12:44:22.300: INFO: Number of nodes with available pods: 0
Aug 11 12:44:22.300: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 11 12:44:22.377: INFO: Number of nodes with available pods: 0
Aug 11 12:44:22.377: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:23.381: INFO: Number of nodes with available pods: 0
Aug 11 12:44:23.381: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:24.544: INFO: Number of nodes with available pods: 0
Aug 11 12:44:24.544: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:25.381: INFO: Number of nodes with available pods: 0
Aug 11 12:44:25.381: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:26.464: INFO: Number of nodes with available pods: 0
Aug 11 12:44:26.464: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:27.404: INFO: Number of nodes with available pods: 1
Aug 11 12:44:27.404: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 11 12:44:27.720: INFO: Number of nodes with available pods: 1
Aug 11 12:44:27.720: INFO: Number of running nodes: 0, number of available pods: 1
Aug 11 12:44:28.954: INFO: Number of nodes with available pods: 0
Aug 11 12:44:28.954: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 11 12:44:29.017: INFO: Number of nodes with available pods: 0
Aug 11 12:44:29.017: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:30.111: INFO: Number of nodes with available pods: 0
Aug 11 12:44:30.111: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:31.021: INFO: Number of nodes with available pods: 0
Aug 11 12:44:31.021: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:32.022: INFO: Number of nodes with available pods: 0
Aug 11 12:44:32.022: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:33.020: INFO: Number of nodes with available pods: 0
Aug 11 12:44:33.021: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:34.022: INFO: Number of nodes with available pods: 0
Aug 11 12:44:34.022: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:35.022: INFO: Number of nodes with available pods: 0
Aug 11 12:44:35.022: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:36.020: INFO: Number of nodes with available pods: 0
Aug 11 12:44:36.020: INFO: Node kali-worker is running more than one daemon pod
Aug 11 12:44:37.022: INFO: Number of nodes with available pods: 1
Aug 11 12:44:37.022: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7754, will wait for the garbage collector to delete the pods
Aug 11 12:44:37.089: INFO: Deleting DaemonSet.extensions daemon-set took: 7.661348ms
Aug 11 12:44:37.489: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.258267ms
Aug 11 12:44:43.493: INFO: Number of nodes with available pods: 0
Aug 11 12:44:43.493: INFO: Number of running nodes: 0, number of available pods: 0
Aug 11 12:44:43.495: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7754/daemonsets","resourceVersion":"8566723"},"items":null}

Aug 11 12:44:43.529: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7754/pods","resourceVersion":"8566724"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:44:43.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7754" for this suite.

• [SLOW TEST:21.466 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":170,"skipped":2733,"failed":0}
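The sequence above (label a node blue and watch the daemon pod land, relabel to green and watch it unscheduled, then switch the selector and update strategy) corresponds to a node-selector-gated DaemonSet. A minimal sketch of such a manifest, with illustrative names, labels, and image rather than the exact ones the test generates:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # illustrative; the test creates its own "daemon-set"
spec:
  selector:
    matchLabels:
      app: daemon-pod
  updateStrategy:
    type: RollingUpdate       # the test switches the strategy to RollingUpdate mid-run
  template:
    metadata:
      labels:
        app: daemon-pod
    spec:
      nodeSelector:
        color: green          # pods are scheduled/unscheduled as nodes gain/lose this label
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2   # placeholder image, not from the log
```

With this applied, `kubectl label node kali-worker color=green` would schedule one daemon pod on that node, and changing or removing the label would drain it again, mirroring the log above.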
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:44:43.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:44:43.673: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c4229f1-e3cb-4a87-8417-1250a58bed86" in namespace "downward-api-6130" to be "Succeeded or Failed"
Aug 11 12:44:43.695: INFO: Pod "downwardapi-volume-4c4229f1-e3cb-4a87-8417-1250a58bed86": Phase="Pending", Reason="", readiness=false. Elapsed: 21.930609ms
Aug 11 12:44:45.728: INFO: Pod "downwardapi-volume-4c4229f1-e3cb-4a87-8417-1250a58bed86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05475917s
Aug 11 12:44:47.731: INFO: Pod "downwardapi-volume-4c4229f1-e3cb-4a87-8417-1250a58bed86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058562879s
STEP: Saw pod success
Aug 11 12:44:47.731: INFO: Pod "downwardapi-volume-4c4229f1-e3cb-4a87-8417-1250a58bed86" satisfied condition "Succeeded or Failed"
Aug 11 12:44:47.734: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-4c4229f1-e3cb-4a87-8417-1250a58bed86 container client-container: 
STEP: delete the pod
Aug 11 12:44:47.819: INFO: Waiting for pod downwardapi-volume-4c4229f1-e3cb-4a87-8417-1250a58bed86 to disappear
Aug 11 12:44:47.827: INFO: Pod downwardapi-volume-4c4229f1-e3cb-4a87-8417-1250a58bed86 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:44:47.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6130" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":2733,"failed":0}
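The test above checks that a downward API volume exposes the container's own memory limit as a file. A minimal sketch of an equivalent pod, assuming a shell-capable busybox image and an illustrative name (the test's actual pod spec is not shown in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed shell-capable image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"               # the value the volume file should report (in bytes)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```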
S
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:44:47.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:44:48.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-8532" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":172,"skipped":2734,"failed":0}
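The Lease availability check exercises the coordination.k8s.io/v1 API group. A minimal Lease object of the kind such a test creates, with illustrative names and values:

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease        # illustrative
  namespace: default
spec:
  holderIdentity: example-holder   # who currently holds the lease
  leaseDurationSeconds: 30         # how long the holder's claim is considered valid
```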
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:44:48.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:44:48.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33f74239-e3c8-44d1-9a0e-7513f00c1414" in namespace "projected-4585" to be "Succeeded or Failed"
Aug 11 12:44:48.109: INFO: Pod "downwardapi-volume-33f74239-e3c8-44d1-9a0e-7513f00c1414": Phase="Pending", Reason="", readiness=false. Elapsed: 27.127247ms
Aug 11 12:44:50.143: INFO: Pod "downwardapi-volume-33f74239-e3c8-44d1-9a0e-7513f00c1414": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060826959s
Aug 11 12:44:52.147: INFO: Pod "downwardapi-volume-33f74239-e3c8-44d1-9a0e-7513f00c1414": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064545718s
STEP: Saw pod success
Aug 11 12:44:52.147: INFO: Pod "downwardapi-volume-33f74239-e3c8-44d1-9a0e-7513f00c1414" satisfied condition "Succeeded or Failed"
Aug 11 12:44:52.149: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-33f74239-e3c8-44d1-9a0e-7513f00c1414 container client-container: 
STEP: delete the pod
Aug 11 12:44:52.218: INFO: Waiting for pod downwardapi-volume-33f74239-e3c8-44d1-9a0e-7513f00c1414 to disappear
Aug 11 12:44:52.221: INFO: Pod downwardapi-volume-33f74239-e3c8-44d1-9a0e-7513f00c1414 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:44:52.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4585" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2773,"failed":0}
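This test asserts that when a container sets no memory limit, a projected downward API volume reports the node's allocatable memory as the default. A sketch of such a pod, with illustrative names and an assumed busybox image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed shell-capable image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory: the reported value falls back to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```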
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:44:52.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:44:52.309: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13bf7ba9-fc48-43ed-b80e-5c6fe6adaff7" in namespace "downward-api-5723" to be "Succeeded or Failed"
Aug 11 12:44:52.313: INFO: Pod "downwardapi-volume-13bf7ba9-fc48-43ed-b80e-5c6fe6adaff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.831588ms
Aug 11 12:44:54.326: INFO: Pod "downwardapi-volume-13bf7ba9-fc48-43ed-b80e-5c6fe6adaff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016420694s
Aug 11 12:44:56.330: INFO: Pod "downwardapi-volume-13bf7ba9-fc48-43ed-b80e-5c6fe6adaff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020669845s
STEP: Saw pod success
Aug 11 12:44:56.330: INFO: Pod "downwardapi-volume-13bf7ba9-fc48-43ed-b80e-5c6fe6adaff7" satisfied condition "Succeeded or Failed"
Aug 11 12:44:56.333: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-13bf7ba9-fc48-43ed-b80e-5c6fe6adaff7 container client-container: 
STEP: delete the pod
Aug 11 12:44:56.584: INFO: Waiting for pod downwardapi-volume-13bf7ba9-fc48-43ed-b80e-5c6fe6adaff7 to disappear
Aug 11 12:44:56.586: INFO: Pod downwardapi-volume-13bf7ba9-fc48-43ed-b80e-5c6fe6adaff7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:44:56.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5723" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2809,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:44:56.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:44:56.943: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 11 12:44:57.050: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 11 12:45:02.062: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 11 12:45:02.063: INFO: Creating deployment "test-rolling-update-deployment"
Aug 11 12:45:02.068: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 11 12:45:02.083: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 11 12:45:04.091: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 11 12:45:04.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746702, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746702, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746702, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746702, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:45:06.194: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 11 12:45:06.204: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-8986 /apis/apps/v1/namespaces/deployment-8986/deployments/test-rolling-update-deployment d203f5bb-e3ac-409e-9b9c-9e699bc89716 8566941 1 2020-08-11 12:45:02 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-08-11 12:45:02 +0000 UTC FieldsV1 <managedFields raw byte dump elided>} {kube-controller-manager Update apps/v1 2020-08-11 12:45:05 +0000 UTC FieldsV1 <managedFields raw byte dump elided>}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004feeed8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[]
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-11 12:45:02 +0000 UTC,LastTransitionTime:2020-08-11 12:45:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-08-11 12:45:05 +0000 UTC,LastTransitionTime:2020-08-11 12:45:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
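The Deployment dumped above can be reconstructed as a manifest. The selector, image, replica count, and 25% surge/unavailable values come from the dump itself; the indentation and field order are editorial:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  namespace: deployment-8986
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod          # matches the adopted "test-rolling-update-controller" pods
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
```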

Aug 11 12:45:06.207: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-8986 /apis/apps/v1/namespaces/deployment-8986/replicasets/test-rolling-update-deployment-59d5cb45c7 8fa423f4-1cad-4fa6-ba6b-7c332133d688 8566928 1 2020-08-11 12:45:02 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment d203f5bb-e3ac-409e-9b9c-9e699bc89716 0xc004fef447 0xc004fef448}] []  [{kube-controller-manager Update apps/v1 2020-08-11 12:45:05 +0000 UTC FieldsV1 <managedFields raw byte dump elided>}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004fef4d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 11 12:45:06.207: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 11 12:45:06.207: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-8986 /apis/apps/v1/namespaces/deployment-8986/replicasets/test-rolling-update-controller 746f9d03-826f-447b-810b-4fa906612b29 8566939 2 2020-08-11 12:44:56 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment d203f5bb-e3ac-409e-9b9c-9e699bc89716 0xc004fef327 0xc004fef328}] []  [{e2e.test Update apps/v1 2020-08-11 12:44:56 +0000 UTC FieldsV1 <managedFields raw byte dump elided>} {kube-controller-manager Update apps/v1 2020-08-11 12:45:05 +0000 UTC FieldsV1 <managedFields raw byte dump elided>}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004fef3d8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 11 12:45:06.211: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-h9gb2" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-h9gb2 test-rolling-update-deployment-59d5cb45c7- deployment-8986 /api/v1/namespaces/deployment-8986/pods/test-rolling-update-deployment-59d5cb45c7-h9gb2 406246ff-dc90-476d-a932-ebe3a6d88e53 8566927 0 2020-08-11 12:45:02 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 8fa423f4-1cad-4fa6-ba6b-7c332133d688 0xc003a5a517 0xc003a5a518}] []  [{kube-controller-manager Update v1 2020-08-11 12:45:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 102 97 52 50 51 102 52 45 49 99 97 100 45 52 102 97 54 45 98 97 54 98 45 55 99 51 51 50 49 51 51 100 54 56 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 
123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 12:45:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 
115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 49 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nwvp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nwvp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nwvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileg
ed:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:45:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 
12:45:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.118,StartTime:2020-08-11 12:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 12:45:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://e2fbafe164a588676ea88526ac640741ba438337e979d50a777b81a8bd1e1ca4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.118,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
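The long runs of decimal numbers in the ReplicaSet and Pod dumps above are `managedFields` entries: each `FieldsV1{Raw:*[…]}` value is a JSON document printed as raw byte values. A small helper (not part of the e2e framework, just a reading aid) turns such a dump back into readable JSON:

```python
import json

def decode_fieldsv1(raw):
    """Decode a FieldsV1 Raw dump (list of decimal byte values) into JSON."""
    return json.loads(bytes(raw).decode("utf-8"))

# The first bytes of the kubelet update above are
# 123 34 102 58 115 116 97 116 117 115 34 58 123 ...
prefix = bytes([123, 34, 102, 58, 115, 116, 97, 116, 117, 115, 34, 58, 123]).decode("utf-8")
# decodes to the start of the status patch: {"f:status":{
```

A complete array decodes to the field-ownership map the apiserver tracks for server-side apply, e.g. `{"f:metadata":{"f:labels":…}}`.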
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:45:06.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8986" for this suite.

• [SLOW TEST:9.625 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":175,"skipped":2833,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:45:06.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:45:06.393: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b135ba5-9afa-49a3-a007-f5302fd941d0" in namespace "projected-4627" to be "Succeeded or Failed"
Aug 11 12:45:06.408: INFO: Pod "downwardapi-volume-7b135ba5-9afa-49a3-a007-f5302fd941d0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.81415ms
Aug 11 12:45:08.414: INFO: Pod "downwardapi-volume-7b135ba5-9afa-49a3-a007-f5302fd941d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020836672s
Aug 11 12:45:10.417: INFO: Pod "downwardapi-volume-7b135ba5-9afa-49a3-a007-f5302fd941d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024525456s
STEP: Saw pod success
Aug 11 12:45:10.417: INFO: Pod "downwardapi-volume-7b135ba5-9afa-49a3-a007-f5302fd941d0" satisfied condition "Succeeded or Failed"
Aug 11 12:45:10.420: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-7b135ba5-9afa-49a3-a007-f5302fd941d0 container client-container: 
STEP: delete the pod
Aug 11 12:45:10.478: INFO: Waiting for pod downwardapi-volume-7b135ba5-9afa-49a3-a007-f5302fd941d0 to disappear
Aug 11 12:45:10.486: INFO: Pod downwardapi-volume-7b135ba5-9afa-49a3-a007-f5302fd941d0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:45:10.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4627" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":2869,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:45:10.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Aug 11 12:45:10.548: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix147918680/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:45:10.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1011" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":177,"skipped":2880,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:45:10.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 11 12:45:10.717: INFO: Waiting up to 5m0s for pod "pod-82323337-fc6e-47ce-afc4-5ae13a5a8ab6" in namespace "emptydir-1509" to be "Succeeded or Failed"
Aug 11 12:45:10.751: INFO: Pod "pod-82323337-fc6e-47ce-afc4-5ae13a5a8ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.674499ms
Aug 11 12:45:12.756: INFO: Pod "pod-82323337-fc6e-47ce-afc4-5ae13a5a8ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038434426s
Aug 11 12:45:14.760: INFO: Pod "pod-82323337-fc6e-47ce-afc4-5ae13a5a8ab6": Phase="Running", Reason="", readiness=true. Elapsed: 4.042719362s
Aug 11 12:45:16.765: INFO: Pod "pod-82323337-fc6e-47ce-afc4-5ae13a5a8ab6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047395352s
STEP: Saw pod success
Aug 11 12:45:16.765: INFO: Pod "pod-82323337-fc6e-47ce-afc4-5ae13a5a8ab6" satisfied condition "Succeeded or Failed"
Aug 11 12:45:16.768: INFO: Trying to get logs from node kali-worker2 pod pod-82323337-fc6e-47ce-afc4-5ae13a5a8ab6 container test-container: 
STEP: delete the pod
Aug 11 12:45:16.801: INFO: Waiting for pod pod-82323337-fc6e-47ce-afc4-5ae13a5a8ab6 to disappear
Aug 11 12:45:16.805: INFO: Pod pod-82323337-fc6e-47ce-afc4-5ae13a5a8ab6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:45:16.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1509" for this suite.

• [SLOW TEST:6.201 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":2893,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:45:16.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0811 12:45:29.639038       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 12:45:29.639: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:45:29.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6396" for this suite.

• [SLOW TEST:13.299 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":179,"skipped":2897,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:45:30.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:45:30.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8211" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":180,"skipped":2907,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:45:30.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 11 12:45:30.584: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 11 12:45:35.638: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:45:35.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3614" for this suite.

• [SLOW TEST:5.624 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":181,"skipped":2926,"failed":0}
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:45:36.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-ed65aae9-681c-4e11-90a7-eff6107060ed in namespace container-probe-4307
Aug 11 12:45:43.560: INFO: Started pod busybox-ed65aae9-681c-4e11-90a7-eff6107060ed in namespace container-probe-4307
STEP: checking the pod's current state and verifying that restartCount is present
Aug 11 12:45:43.563: INFO: Initial restart count of pod busybox-ed65aae9-681c-4e11-90a7-eff6107060ed is 0
Aug 11 12:46:36.382: INFO: Restart count of pod container-probe-4307/busybox-ed65aae9-681c-4e11-90a7-eff6107060ed is now 1 (52.818594717s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:46:36.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4307" for this suite.

• [SLOW TEST:60.346 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":2929,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:46:36.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:46:40.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3428" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":2938,"failed":0}

------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:46:40.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8933
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 11 12:46:40.670: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 11 12:46:40.711: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:46:42.716: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:46:45.034: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:46:46.716: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:46:48.716: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:46:50.716: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:46:52.716: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:46:54.716: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:46:56.717: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:46:58.716: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:47:00.716: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:47:02.715: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 11 12:47:02.719: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 11 12:47:04.723: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 11 12:47:10.800: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.138:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8933 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:47:10.800: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:47:10.833293       7 log.go:172] (0xc002da2370) (0xc000b5a320) Create stream
I0811 12:47:10.833322       7 log.go:172] (0xc002da2370) (0xc000b5a320) Stream added, broadcasting: 1
I0811 12:47:10.835326       7 log.go:172] (0xc002da2370) Reply frame received for 1
I0811 12:47:10.835367       7 log.go:172] (0xc002da2370) (0xc00148f400) Create stream
I0811 12:47:10.835382       7 log.go:172] (0xc002da2370) (0xc00148f400) Stream added, broadcasting: 3
I0811 12:47:10.836374       7 log.go:172] (0xc002da2370) Reply frame received for 3
I0811 12:47:10.836417       7 log.go:172] (0xc002da2370) (0xc0010fe0a0) Create stream
I0811 12:47:10.836433       7 log.go:172] (0xc002da2370) (0xc0010fe0a0) Stream added, broadcasting: 5
I0811 12:47:10.837529       7 log.go:172] (0xc002da2370) Reply frame received for 5
I0811 12:47:10.923559       7 log.go:172] (0xc002da2370) Data frame received for 5
I0811 12:47:10.923616       7 log.go:172] (0xc002da2370) Data frame received for 3
I0811 12:47:10.923670       7 log.go:172] (0xc00148f400) (3) Data frame handling
I0811 12:47:10.923689       7 log.go:172] (0xc00148f400) (3) Data frame sent
I0811 12:47:10.923761       7 log.go:172] (0xc002da2370) Data frame received for 3
I0811 12:47:10.923786       7 log.go:172] (0xc00148f400) (3) Data frame handling
I0811 12:47:10.923818       7 log.go:172] (0xc0010fe0a0) (5) Data frame handling
I0811 12:47:10.926030       7 log.go:172] (0xc002da2370) Data frame received for 1
I0811 12:47:10.926057       7 log.go:172] (0xc000b5a320) (1) Data frame handling
I0811 12:47:10.926079       7 log.go:172] (0xc000b5a320) (1) Data frame sent
I0811 12:47:10.926100       7 log.go:172] (0xc002da2370) (0xc000b5a320) Stream removed, broadcasting: 1
I0811 12:47:10.926122       7 log.go:172] (0xc002da2370) Go away received
I0811 12:47:10.926220       7 log.go:172] (0xc002da2370) (0xc000b5a320) Stream removed, broadcasting: 1
I0811 12:47:10.926242       7 log.go:172] (0xc002da2370) (0xc00148f400) Stream removed, broadcasting: 3
I0811 12:47:10.926262       7 log.go:172] (0xc002da2370) (0xc0010fe0a0) Stream removed, broadcasting: 5
Aug 11 12:47:10.926: INFO: Found all expected endpoints: [netserver-0]
Aug 11 12:47:10.929: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.128:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8933 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:47:10.929: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:47:10.957130       7 log.go:172] (0xc0053102c0) (0xc0010fe6e0) Create stream
I0811 12:47:10.957160       7 log.go:172] (0xc0053102c0) (0xc0010fe6e0) Stream added, broadcasting: 1
I0811 12:47:10.961731       7 log.go:172] (0xc0053102c0) Reply frame received for 1
I0811 12:47:10.961834       7 log.go:172] (0xc0053102c0) (0xc0010feb40) Create stream
I0811 12:47:10.961910       7 log.go:172] (0xc0053102c0) (0xc0010feb40) Stream added, broadcasting: 3
I0811 12:47:10.966310       7 log.go:172] (0xc0053102c0) Reply frame received for 3
I0811 12:47:10.966347       7 log.go:172] (0xc0053102c0) (0xc000b5a3c0) Create stream
I0811 12:47:10.966360       7 log.go:172] (0xc0053102c0) (0xc000b5a3c0) Stream added, broadcasting: 5
I0811 12:47:10.967374       7 log.go:172] (0xc0053102c0) Reply frame received for 5
I0811 12:47:11.020428       7 log.go:172] (0xc0053102c0) Data frame received for 3
I0811 12:47:11.020474       7 log.go:172] (0xc0010feb40) (3) Data frame handling
I0811 12:47:11.020497       7 log.go:172] (0xc0010feb40) (3) Data frame sent
I0811 12:47:11.020616       7 log.go:172] (0xc0053102c0) Data frame received for 5
I0811 12:47:11.020640       7 log.go:172] (0xc000b5a3c0) (5) Data frame handling
I0811 12:47:11.020676       7 log.go:172] (0xc0053102c0) Data frame received for 3
I0811 12:47:11.020690       7 log.go:172] (0xc0010feb40) (3) Data frame handling
I0811 12:47:11.022542       7 log.go:172] (0xc0053102c0) Data frame received for 1
I0811 12:47:11.022577       7 log.go:172] (0xc0010fe6e0) (1) Data frame handling
I0811 12:47:11.022607       7 log.go:172] (0xc0010fe6e0) (1) Data frame sent
I0811 12:47:11.022639       7 log.go:172] (0xc0053102c0) (0xc0010fe6e0) Stream removed, broadcasting: 1
I0811 12:47:11.022795       7 log.go:172] (0xc0053102c0) (0xc0010fe6e0) Stream removed, broadcasting: 1
I0811 12:47:11.022821       7 log.go:172] (0xc0053102c0) (0xc0010feb40) Stream removed, broadcasting: 3
I0811 12:47:11.022853       7 log.go:172] (0xc0053102c0) Go away received
I0811 12:47:11.022921       7 log.go:172] (0xc0053102c0) (0xc000b5a3c0) Stream removed, broadcasting: 5
Aug 11 12:47:11.022: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:47:11.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8933" for this suite.

• [SLOW TEST:30.464 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":2938,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:47:11.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2311
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Aug 11 12:47:11.181: INFO: Found 0 stateful pods, waiting for 3
Aug 11 12:47:21.187: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 12:47:21.187: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 12:47:21.187: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 11 12:47:31.186: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 12:47:31.186: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 12:47:31.186: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 11 12:47:31.213: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 11 12:47:41.300: INFO: Updating stateful set ss2
Aug 11 12:47:41.357: INFO: Waiting for Pod statefulset-2311/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 11 12:47:52.277: INFO: Found 2 stateful pods, waiting for 3
Aug 11 12:48:02.283: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 12:48:02.283: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 12:48:02.283: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 11 12:48:02.307: INFO: Updating stateful set ss2
Aug 11 12:48:02.342: INFO: Waiting for Pod statefulset-2311/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 11 12:48:12.353: INFO: Waiting for Pod statefulset-2311/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 11 12:48:22.369: INFO: Updating stateful set ss2
Aug 11 12:48:22.415: INFO: Waiting for StatefulSet statefulset-2311/ss2 to complete update
Aug 11 12:48:22.415: INFO: Waiting for Pod statefulset-2311/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 11 12:48:32.423: INFO: Waiting for StatefulSet statefulset-2311/ss2 to complete update
Aug 11 12:48:32.423: INFO: Waiting for Pod statefulset-2311/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 11 12:48:42.424: INFO: Deleting all statefulset in ns statefulset-2311
Aug 11 12:48:42.427: INFO: Scaling statefulset ss2 to 0
Aug 11 12:49:02.450: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 12:49:02.454: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:49:02.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2311" for this suite.

• [SLOW TEST:111.445 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":185,"skipped":2950,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:49:02.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:49:02.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:49:06.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3023" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":2954,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:49:06.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-0cb3dfe5-7c5e-496f-8f69-e843659beb9f
STEP: Creating a pod to test consume secrets
Aug 11 12:49:06.898: INFO: Waiting up to 5m0s for pod "pod-secrets-c7cd26dd-b879-4712-a192-81a714accf92" in namespace "secrets-4204" to be "Succeeded or Failed"
Aug 11 12:49:06.952: INFO: Pod "pod-secrets-c7cd26dd-b879-4712-a192-81a714accf92": Phase="Pending", Reason="", readiness=false. Elapsed: 54.01033ms
Aug 11 12:49:08.973: INFO: Pod "pod-secrets-c7cd26dd-b879-4712-a192-81a714accf92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075488421s
Aug 11 12:49:10.977: INFO: Pod "pod-secrets-c7cd26dd-b879-4712-a192-81a714accf92": Phase="Running", Reason="", readiness=true. Elapsed: 4.079289531s
Aug 11 12:49:12.981: INFO: Pod "pod-secrets-c7cd26dd-b879-4712-a192-81a714accf92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083368093s
STEP: Saw pod success
Aug 11 12:49:12.981: INFO: Pod "pod-secrets-c7cd26dd-b879-4712-a192-81a714accf92" satisfied condition "Succeeded or Failed"
Aug 11 12:49:12.984: INFO: Trying to get logs from node kali-worker pod pod-secrets-c7cd26dd-b879-4712-a192-81a714accf92 container secret-volume-test: 
STEP: delete the pod
Aug 11 12:49:13.023: INFO: Waiting for pod pod-secrets-c7cd26dd-b879-4712-a192-81a714accf92 to disappear
Aug 11 12:49:13.026: INFO: Pod pod-secrets-c7cd26dd-b879-4712-a192-81a714accf92 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:49:13.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4204" for this suite.

• [SLOW TEST:6.318 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":2971,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:49:13.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:49:13.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23c510b4-36c6-46ff-a103-33fe8366ce22" in namespace "projected-3139" to be "Succeeded or Failed"
Aug 11 12:49:13.220: INFO: Pod "downwardapi-volume-23c510b4-36c6-46ff-a103-33fe8366ce22": Phase="Pending", Reason="", readiness=false. Elapsed: 74.21907ms
Aug 11 12:49:15.224: INFO: Pod "downwardapi-volume-23c510b4-36c6-46ff-a103-33fe8366ce22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077913915s
Aug 11 12:49:17.229: INFO: Pod "downwardapi-volume-23c510b4-36c6-46ff-a103-33fe8366ce22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083044105s
STEP: Saw pod success
Aug 11 12:49:17.229: INFO: Pod "downwardapi-volume-23c510b4-36c6-46ff-a103-33fe8366ce22" satisfied condition "Succeeded or Failed"
Aug 11 12:49:17.232: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-23c510b4-36c6-46ff-a103-33fe8366ce22 container client-container: 
STEP: delete the pod
Aug 11 12:49:17.315: INFO: Waiting for pod downwardapi-volume-23c510b4-36c6-46ff-a103-33fe8366ce22 to disappear
Aug 11 12:49:17.320: INFO: Pod downwardapi-volume-23c510b4-36c6-46ff-a103-33fe8366ce22 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:49:17.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3139" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":2973,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:49:17.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:49:17.429: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-d07e0eca-6c2a-4e0b-8f98-c5f954c18151" in namespace "security-context-test-5840" to be "Succeeded or Failed"
Aug 11 12:49:17.446: INFO: Pod "busybox-privileged-false-d07e0eca-6c2a-4e0b-8f98-c5f954c18151": Phase="Pending", Reason="", readiness=false. Elapsed: 16.254352ms
Aug 11 12:49:19.623: INFO: Pod "busybox-privileged-false-d07e0eca-6c2a-4e0b-8f98-c5f954c18151": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193908629s
Aug 11 12:49:21.627: INFO: Pod "busybox-privileged-false-d07e0eca-6c2a-4e0b-8f98-c5f954c18151": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.197705351s
Aug 11 12:49:21.627: INFO: Pod "busybox-privileged-false-d07e0eca-6c2a-4e0b-8f98-c5f954c18151" satisfied condition "Succeeded or Failed"
Aug 11 12:49:21.634: INFO: Got logs for pod "busybox-privileged-false-d07e0eca-6c2a-4e0b-8f98-c5f954c18151": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:49:21.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5840" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":2991,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:49:21.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:49:22.584: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:49:24.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746962, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746962, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746962, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732746962, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:49:27.626: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:49:27.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-107" for this suite.
STEP: Destroying namespace "webhook-107-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.217 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":190,"skipped":2992,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:49:27.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 11 12:49:28.023: INFO: Waiting up to 5m0s for pod "pod-0dc648cb-a7b8-46e3-bd7a-3d2f2ba9948a" in namespace "emptydir-4456" to be "Succeeded or Failed"
Aug 11 12:49:28.027: INFO: Pod "pod-0dc648cb-a7b8-46e3-bd7a-3d2f2ba9948a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225709ms
Aug 11 12:49:30.032: INFO: Pod "pod-0dc648cb-a7b8-46e3-bd7a-3d2f2ba9948a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008649012s
Aug 11 12:49:32.035: INFO: Pod "pod-0dc648cb-a7b8-46e3-bd7a-3d2f2ba9948a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012567457s
STEP: Saw pod success
Aug 11 12:49:32.035: INFO: Pod "pod-0dc648cb-a7b8-46e3-bd7a-3d2f2ba9948a" satisfied condition "Succeeded or Failed"
Aug 11 12:49:32.038: INFO: Trying to get logs from node kali-worker pod pod-0dc648cb-a7b8-46e3-bd7a-3d2f2ba9948a container test-container: 
STEP: delete the pod
Aug 11 12:49:32.077: INFO: Waiting for pod pod-0dc648cb-a7b8-46e3-bd7a-3d2f2ba9948a to disappear
Aug 11 12:49:32.081: INFO: Pod pod-0dc648cb-a7b8-46e3-bd7a-3d2f2ba9948a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:49:32.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4456" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3040,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:49:32.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7457
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-7457
I0811 12:49:32.304131       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7457, replica count: 2
I0811 12:49:35.354551       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 12:49:38.354807       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 11 12:49:38.354: INFO: Creating new exec pod
Aug 11 12:49:43.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-7457 execpodhpknc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 11 12:49:43.602: INFO: stderr: "I0811 12:49:43.511442    1634 log.go:172] (0xc0009b4000) (0xc00098e000) Create stream\nI0811 12:49:43.511500    1634 log.go:172] (0xc0009b4000) (0xc00098e000) Stream added, broadcasting: 1\nI0811 12:49:43.515100    1634 log.go:172] (0xc0009b4000) Reply frame received for 1\nI0811 12:49:43.515151    1634 log.go:172] (0xc0009b4000) (0xc00098e0a0) Create stream\nI0811 12:49:43.515165    1634 log.go:172] (0xc0009b4000) (0xc00098e0a0) Stream added, broadcasting: 3\nI0811 12:49:43.516054    1634 log.go:172] (0xc0009b4000) Reply frame received for 3\nI0811 12:49:43.516091    1634 log.go:172] (0xc0009b4000) (0xc000b14000) Create stream\nI0811 12:49:43.516101    1634 log.go:172] (0xc0009b4000) (0xc000b14000) Stream added, broadcasting: 5\nI0811 12:49:43.517179    1634 log.go:172] (0xc0009b4000) Reply frame received for 5\nI0811 12:49:43.594272    1634 log.go:172] (0xc0009b4000) Data frame received for 3\nI0811 12:49:43.594300    1634 log.go:172] (0xc00098e0a0) (3) Data frame handling\nI0811 12:49:43.594317    1634 log.go:172] (0xc0009b4000) Data frame received for 5\nI0811 12:49:43.594323    1634 log.go:172] (0xc000b14000) (5) Data frame handling\nI0811 12:49:43.594333    1634 log.go:172] (0xc000b14000) (5) Data frame sent\nI0811 12:49:43.594340    1634 log.go:172] (0xc0009b4000) Data frame received for 5\nI0811 12:49:43.594345    1634 log.go:172] (0xc000b14000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0811 12:49:43.595976    1634 log.go:172] (0xc0009b4000) Data frame received for 1\nI0811 12:49:43.596009    1634 log.go:172] (0xc00098e000) (1) Data frame handling\nI0811 12:49:43.596037    1634 log.go:172] (0xc00098e000) (1) Data frame sent\nI0811 12:49:43.596059    1634 log.go:172] (0xc0009b4000) (0xc00098e000) Stream removed, broadcasting: 1\nI0811 12:49:43.596382    1634 log.go:172] (0xc0009b4000) Go away received\nI0811 12:49:43.596507    1634 
log.go:172] (0xc0009b4000) (0xc00098e000) Stream removed, broadcasting: 1\nI0811 12:49:43.596528    1634 log.go:172] (0xc0009b4000) (0xc00098e0a0) Stream removed, broadcasting: 3\nI0811 12:49:43.596544    1634 log.go:172] (0xc0009b4000) (0xc000b14000) Stream removed, broadcasting: 5\n"
Aug 11 12:49:43.603: INFO: stdout: ""
Aug 11 12:49:43.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-7457 execpodhpknc -- /bin/sh -x -c nc -zv -t -w 2 10.105.123.8 80'
Aug 11 12:49:43.822: INFO: stderr: "I0811 12:49:43.739077    1655 log.go:172] (0xc000622a50) (0xc0006254a0) Create stream\nI0811 12:49:43.739144    1655 log.go:172] (0xc000622a50) (0xc0006254a0) Stream added, broadcasting: 1\nI0811 12:49:43.742768    1655 log.go:172] (0xc000622a50) Reply frame received for 1\nI0811 12:49:43.742848    1655 log.go:172] (0xc000622a50) (0xc0004d0000) Create stream\nI0811 12:49:43.742880    1655 log.go:172] (0xc000622a50) (0xc0004d0000) Stream added, broadcasting: 3\nI0811 12:49:43.743741    1655 log.go:172] (0xc000622a50) Reply frame received for 3\nI0811 12:49:43.743786    1655 log.go:172] (0xc000622a50) (0xc0004d00a0) Create stream\nI0811 12:49:43.743801    1655 log.go:172] (0xc000622a50) (0xc0004d00a0) Stream added, broadcasting: 5\nI0811 12:49:43.744913    1655 log.go:172] (0xc000622a50) Reply frame received for 5\nI0811 12:49:43.814917    1655 log.go:172] (0xc000622a50) Data frame received for 3\nI0811 12:49:43.814946    1655 log.go:172] (0xc0004d0000) (3) Data frame handling\nI0811 12:49:43.814965    1655 log.go:172] (0xc000622a50) Data frame received for 5\nI0811 12:49:43.814972    1655 log.go:172] (0xc0004d00a0) (5) Data frame handling\nI0811 12:49:43.814980    1655 log.go:172] (0xc0004d00a0) (5) Data frame sent\nI0811 12:49:43.814987    1655 log.go:172] (0xc000622a50) Data frame received for 5\nI0811 12:49:43.814993    1655 log.go:172] (0xc0004d00a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.123.8 80\nConnection to 10.105.123.8 80 port [tcp/http] succeeded!\nI0811 12:49:43.816271    1655 log.go:172] (0xc000622a50) Data frame received for 1\nI0811 12:49:43.816296    1655 log.go:172] (0xc0006254a0) (1) Data frame handling\nI0811 12:49:43.816307    1655 log.go:172] (0xc0006254a0) (1) Data frame sent\nI0811 12:49:43.816319    1655 log.go:172] (0xc000622a50) (0xc0006254a0) Stream removed, broadcasting: 1\nI0811 12:49:43.816433    1655 log.go:172] (0xc000622a50) Go away received\nI0811 12:49:43.816608    1655 log.go:172] (0xc000622a50) (0xc0006254a0) Stream removed, broadcasting: 1\nI0811 12:49:43.816625    1655 log.go:172] (0xc000622a50) (0xc0004d0000) Stream removed, broadcasting: 3\nI0811 12:49:43.816633    1655 log.go:172] (0xc000622a50) (0xc0004d00a0) Stream removed, broadcasting: 5\n"
Aug 11 12:49:43.822: INFO: stdout: ""
Aug 11 12:49:43.822: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:49:43.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7457" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:11.801 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":192,"skipped":3044,"failed":0}
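Editor's note: the test above verifies the converted ClusterIP service by running `nc -zv -t -w 2 10.105.123.8 80` inside a helper pod and treating a successful connect as proof the service routes traffic. A minimal Python sketch of that same zero-I/O connect check (not the e2e framework's Go code, just an illustration):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within
    `timeout`, mirroring `nc -zv -t -w 2 host port` (connect, no I/O)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The `-w 2` flag in the logged command corresponds to the two-second timeout: the probe either connects quickly or fails fast so the test can retry.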
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:49:43.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:49:51.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4798" for this suite.
STEP: Destroying namespace "nsdeletetest-10" for this suite.
Aug 11 12:49:51.471: INFO: Namespace nsdeletetest-10 was already deleted
STEP: Destroying namespace "nsdeletetest-8245" for this suite.

• [SLOW TEST:7.584 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":193,"skipped":3094,"failed":0}
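Editor's note: the "Waiting for the namespace to be removed." step above is a poll-until-condition loop with a deadline, the same pattern behind every "Waiting up to NmNs for ..." line in this log. A simplified Python sketch of that helper (the real framework is Go with richer error reporting):

```python
import time

def wait_for(condition, timeout: float, interval: float = 0.01) -> bool:
    """Poll `condition` until it returns True or `timeout` elapses.
    This mirrors how the e2e framework waits for a namespace (and the
    services inside it) to disappear before recreating it."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

A timeout returns False rather than raising, letting the caller decide whether the unmet condition is a test failure.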
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:49:51.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:49:51.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5138" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":194,"skipped":3122,"failed":0}
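Editor's note: the QOS test above creates a pod whose containers set memory and cpu requests equal to their limits and verifies the API server assigns the Guaranteed class. A hedged sketch of the classification rule from the Kubernetes QoS documentation, using plain dicts rather than the real API types:

```python
def qos_class(containers) -> str:
    """Classify a pod per the Kubernetes QoS rules (simplified):
    - Guaranteed: every container sets cpu and memory requests == limits
    - BestEffort: no container sets any request or limit
    - Burstable:  everything else
    Each container is a dict like {"requests": {...}, "limits": {...}}."""
    resources = ("cpu", "memory")
    if all(not c.get("requests") and not c.get("limits") for c in containers):
        return "BestEffort"
    if all(
        all(c.get("requests", {}).get(r) is not None
            and c.get("requests", {}).get(r) == c.get("limits", {}).get(r)
            for r in resources)
        for c in containers
    ):
        return "Guaranteed"
    return "Burstable"
```

(In the real API, requests default to the limits when only limits are set, which this sketch omits.)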
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:49:52.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 11 12:49:52.145: INFO: Waiting up to 5m0s for pod "downward-api-75c256ed-1c75-41ae-8ee3-d8a9b712a2ae" in namespace "downward-api-2686" to be "Succeeded or Failed"
Aug 11 12:49:52.154: INFO: Pod "downward-api-75c256ed-1c75-41ae-8ee3-d8a9b712a2ae": Phase="Pending", Reason="", readiness=false. Elapsed: 9.679223ms
Aug 11 12:49:54.158: INFO: Pod "downward-api-75c256ed-1c75-41ae-8ee3-d8a9b712a2ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013739999s
Aug 11 12:49:56.163: INFO: Pod "downward-api-75c256ed-1c75-41ae-8ee3-d8a9b712a2ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018361113s
Aug 11 12:49:58.168: INFO: Pod "downward-api-75c256ed-1c75-41ae-8ee3-d8a9b712a2ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022866326s
STEP: Saw pod success
Aug 11 12:49:58.168: INFO: Pod "downward-api-75c256ed-1c75-41ae-8ee3-d8a9b712a2ae" satisfied condition "Succeeded or Failed"
Aug 11 12:49:58.171: INFO: Trying to get logs from node kali-worker2 pod downward-api-75c256ed-1c75-41ae-8ee3-d8a9b712a2ae container dapi-container: 
STEP: delete the pod
Aug 11 12:49:58.191: INFO: Waiting for pod downward-api-75c256ed-1c75-41ae-8ee3-d8a9b712a2ae to disappear
Aug 11 12:49:58.196: INFO: Pod downward-api-75c256ed-1c75-41ae-8ee3-d8a9b712a2ae no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:49:58.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2686" for this suite.

• [SLOW TEST:6.196 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3188,"failed":0}
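Editor's note: the Downward API test above injects the pod's own UID into the container via an env var backed by a `fieldRef` such as `metadata.uid`. A toy Python resolver for that dotted field path against a pod represented as nested dicts (a simplified stand-in for the real API machinery):

```python
def resolve_field_ref(pod: dict, field_path: str):
    """Resolve a downward-API fieldRef like 'metadata.uid' by walking
    the dotted path through a pod object modeled as nested dicts."""
    obj = pod
    for part in field_path.split("."):
        obj = obj[part]
    return obj
```

The test then asserts, from inside the container, that the env var value matches the UID the API server assigned.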
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:49:58.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:49:58.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 11 12:50:00.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6641 create -f -'
Aug 11 12:50:03.644: INFO: stderr: ""
Aug 11 12:50:03.644: INFO: stdout: "e2e-test-crd-publish-openapi-5317-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 11 12:50:03.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6641 delete e2e-test-crd-publish-openapi-5317-crds test-cr'
Aug 11 12:50:03.742: INFO: stderr: ""
Aug 11 12:50:03.742: INFO: stdout: "e2e-test-crd-publish-openapi-5317-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Aug 11 12:50:03.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6641 apply -f -'
Aug 11 12:50:04.002: INFO: stderr: ""
Aug 11 12:50:04.002: INFO: stdout: "e2e-test-crd-publish-openapi-5317-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 11 12:50:04.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6641 delete e2e-test-crd-publish-openapi-5317-crds test-cr'
Aug 11 12:50:04.111: INFO: stderr: ""
Aug 11 12:50:04.111: INFO: stdout: "e2e-test-crd-publish-openapi-5317-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Aug 11 12:50:04.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5317-crds'
Aug 11 12:50:04.339: INFO: stderr: ""
Aug 11 12:50:04.339: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5317-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:50:07.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6641" for this suite.

• [SLOW TEST:9.057 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":196,"skipped":3191,"failed":0}
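Editor's note: the key observation in the test above is that a CRD published without a validation schema accepts requests carrying arbitrary unknown properties, because there is nothing to validate against. A toy illustration of that asymmetry (not the apiextensions validator, just the shape of the rule):

```python
def validate(obj, schema):
    """With no schema, everything passes; with a schema that lists
    `properties` and disallows additional ones, unknown keys fail."""
    if schema is None:                      # no validation schema published
        return True
    allowed = set(schema.get("properties", {}))
    if schema.get("additionalProperties", True):
        return True
    return set(obj) <= allowed
```

This also explains the near-empty `kubectl explain` output logged above: with no schema there are no field descriptions to publish into OpenAPI.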
SSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:50:07.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 11 12:50:13.843: INFO: Successfully updated pod "adopt-release-76wvb"
STEP: Checking that the Job readopts the Pod
Aug 11 12:50:13.843: INFO: Waiting up to 15m0s for pod "adopt-release-76wvb" in namespace "job-5190" to be "adopted"
Aug 11 12:50:13.871: INFO: Pod "adopt-release-76wvb": Phase="Running", Reason="", readiness=true. Elapsed: 28.634874ms
Aug 11 12:50:15.875: INFO: Pod "adopt-release-76wvb": Phase="Running", Reason="", readiness=true. Elapsed: 2.032669311s
Aug 11 12:50:15.875: INFO: Pod "adopt-release-76wvb" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 11 12:50:16.386: INFO: Successfully updated pod "adopt-release-76wvb"
STEP: Checking that the Job releases the Pod
Aug 11 12:50:16.386: INFO: Waiting up to 15m0s for pod "adopt-release-76wvb" in namespace "job-5190" to be "released"
Aug 11 12:50:16.402: INFO: Pod "adopt-release-76wvb": Phase="Running", Reason="", readiness=true. Elapsed: 15.9261ms
Aug 11 12:50:18.418: INFO: Pod "adopt-release-76wvb": Phase="Running", Reason="", readiness=true. Elapsed: 2.032042484s
Aug 11 12:50:18.418: INFO: Pod "adopt-release-76wvb" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:50:18.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5190" for this suite.

• [SLOW TEST:11.167 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":197,"skipped":3195,"failed":0}
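Editor's note: the Job test above exercises the controller's adopt/release rule: an orphaned pod whose labels match the Job's selector gets readopted, and an owned pod whose labels stop matching gets released. A simplified sketch of that reconciliation (plain dicts, not the real ownerReferences machinery):

```python
def reconcile_ownership(job_selector: dict, pod: dict):
    """Adopt an orphan whose labels match the selector; release an owned
    pod once its labels no longer match. `pod` is a dict with 'labels'
    and 'controller' keys (simplified model of a controller ref)."""
    matches = all(pod["labels"].get(k) == v for k, v in job_selector.items())
    if matches and pod["controller"] is None:
        pod["controller"] = "job"      # adopt the orphaned pod
    elif not matches and pod["controller"] == "job":
        pod["controller"] = None       # release the pod
    return pod["controller"]
```

In the log, "Orphaning" and "Removing the labels" are the two label edits that drive these transitions, and the "adopted"/"released" conditions poll until the controller reacts.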
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:50:18.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:50:19.361: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:50:21.484: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747019, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747019, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747019, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747019, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:50:24.592: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:50:24.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:50:25.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8389" for this suite.
STEP: Destroying namespace "webhook-8389-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.458 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":198,"skipped":3197,"failed":0}
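Editor's note: the webhook test above denies create, update, and delete of a custom resource while it carries disallowed data, and allows deletion once the offending key is removed. A minimal sketch of such a validating-webhook decision; the request shape and `OFFENDING_KEY` name are hypothetical simplifications, not taken from the log:

```python
OFFENDING_KEY = "webhook-e2e-test"  # hypothetical disallowed key

def admission_review_response(request: dict) -> dict:
    """Decide an AdmissionReview-like request: for DELETE, inspect the
    existing object (oldObject); otherwise inspect the incoming object.
    Deny whenever the resource data contains the offending key."""
    if request["operation"] == "DELETE":
        obj = request["oldObject"]
    else:
        obj = request["object"]
    denied = OFFENDING_KEY in obj.get("data", {})
    response = {"allowed": not denied}
    if denied:
        response["status"] = {"message": "the custom resource contains unwanted data"}
    return response
```

This matches the logged sequence: updates and deletes are denied until the "Remove the offending key" step, after which the delete succeeds.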
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:50:25.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:50:26.044: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6deb941d-7a20-4e7c-8c1f-a108dbf1b4e5" in namespace "downward-api-7722" to be "Succeeded or Failed"
Aug 11 12:50:26.389: INFO: Pod "downwardapi-volume-6deb941d-7a20-4e7c-8c1f-a108dbf1b4e5": Phase="Pending", Reason="", readiness=false. Elapsed: 345.04263ms
Aug 11 12:50:28.394: INFO: Pod "downwardapi-volume-6deb941d-7a20-4e7c-8c1f-a108dbf1b4e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.349608935s
Aug 11 12:50:30.398: INFO: Pod "downwardapi-volume-6deb941d-7a20-4e7c-8c1f-a108dbf1b4e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.353957053s
STEP: Saw pod success
Aug 11 12:50:30.398: INFO: Pod "downwardapi-volume-6deb941d-7a20-4e7c-8c1f-a108dbf1b4e5" satisfied condition "Succeeded or Failed"
Aug 11 12:50:30.402: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-6deb941d-7a20-4e7c-8c1f-a108dbf1b4e5 container client-container: 
STEP: delete the pod
Aug 11 12:50:30.437: INFO: Waiting for pod downwardapi-volume-6deb941d-7a20-4e7c-8c1f-a108dbf1b4e5 to disappear
Aug 11 12:50:30.449: INFO: Pod downwardapi-volume-6deb941d-7a20-4e7c-8c1f-a108dbf1b4e5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:50:30.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7722" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3215,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:50:30.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug 11 12:50:30.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3626'
Aug 11 12:50:30.827: INFO: stderr: ""
Aug 11 12:50:30.827: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 11 12:50:30.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3626'
Aug 11 12:50:31.043: INFO: stderr: ""
Aug 11 12:50:31.043: INFO: stdout: "update-demo-nautilus-kbmvr update-demo-nautilus-wdcrr "
Aug 11 12:50:31.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbmvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3626'
Aug 11 12:50:31.167: INFO: stderr: ""
Aug 11 12:50:31.167: INFO: stdout: ""
Aug 11 12:50:31.167: INFO: update-demo-nautilus-kbmvr is created but not running
Aug 11 12:50:36.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3626'
Aug 11 12:50:36.259: INFO: stderr: ""
Aug 11 12:50:36.259: INFO: stdout: "update-demo-nautilus-kbmvr update-demo-nautilus-wdcrr "
Aug 11 12:50:36.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbmvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3626'
Aug 11 12:50:36.354: INFO: stderr: ""
Aug 11 12:50:36.354: INFO: stdout: "true"
Aug 11 12:50:36.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbmvr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3626'
Aug 11 12:50:36.446: INFO: stderr: ""
Aug 11 12:50:36.446: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 12:50:36.446: INFO: validating pod update-demo-nautilus-kbmvr
Aug 11 12:50:36.450: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 12:50:36.450: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 11 12:50:36.450: INFO: update-demo-nautilus-kbmvr is verified up and running
Aug 11 12:50:36.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wdcrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3626'
Aug 11 12:50:36.546: INFO: stderr: ""
Aug 11 12:50:36.546: INFO: stdout: "true"
Aug 11 12:50:36.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wdcrr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3626'
Aug 11 12:50:36.657: INFO: stderr: ""
Aug 11 12:50:36.657: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 12:50:36.657: INFO: validating pod update-demo-nautilus-wdcrr
Aug 11 12:50:36.661: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 12:50:36.661: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 11 12:50:36.661: INFO: update-demo-nautilus-wdcrr is verified up and running
STEP: scaling down the replication controller
Aug 11 12:50:36.664: INFO: scanned /root for discovery docs: 
Aug 11 12:50:36.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3626'
Aug 11 12:50:37.827: INFO: stderr: ""
Aug 11 12:50:37.827: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 11 12:50:37.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3626'
Aug 11 12:50:37.941: INFO: stderr: ""
Aug 11 12:50:37.941: INFO: stdout: "update-demo-nautilus-kbmvr update-demo-nautilus-wdcrr "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 11 12:50:42.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3626'
Aug 11 12:50:43.057: INFO: stderr: ""
Aug 11 12:50:43.057: INFO: stdout: "update-demo-nautilus-kbmvr update-demo-nautilus-wdcrr "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 11 12:50:48.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3626'
Aug 11 12:50:48.162: INFO: stderr: ""
Aug 11 12:50:48.163: INFO: stdout: "update-demo-nautilus-wdcrr "
Aug 11 12:50:48.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wdcrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3626'
Aug 11 12:50:48.260: INFO: stderr: ""
Aug 11 12:50:48.260: INFO: stdout: "true"
Aug 11 12:50:48.260: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wdcrr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3626'
Aug 11 12:50:48.352: INFO: stderr: ""
Aug 11 12:50:48.352: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 12:50:48.352: INFO: validating pod update-demo-nautilus-wdcrr
Aug 11 12:50:48.356: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 12:50:48.356: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 11 12:50:48.356: INFO: update-demo-nautilus-wdcrr is verified up and running
STEP: scaling up the replication controller
Aug 11 12:50:48.358: INFO: scanned /root for discovery docs: 
Aug 11 12:50:48.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3626'
Aug 11 12:50:49.517: INFO: stderr: ""
Aug 11 12:50:49.517: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 11 12:50:49.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3626'
Aug 11 12:50:49.619: INFO: stderr: ""
Aug 11 12:50:49.619: INFO: stdout: "update-demo-nautilus-7n2dn update-demo-nautilus-wdcrr "
Aug 11 12:50:49.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7n2dn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3626'
Aug 11 12:50:49.708: INFO: stderr: ""
Aug 11 12:50:49.708: INFO: stdout: ""
Aug 11 12:50:49.708: INFO: update-demo-nautilus-7n2dn is created but not running
Aug 11 12:50:54.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3626'
Aug 11 12:50:54.804: INFO: stderr: ""
Aug 11 12:50:54.804: INFO: stdout: "update-demo-nautilus-7n2dn update-demo-nautilus-wdcrr "
Aug 11 12:50:54.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7n2dn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3626'
Aug 11 12:50:54.959: INFO: stderr: ""
Aug 11 12:50:54.959: INFO: stdout: "true"
Aug 11 12:50:54.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7n2dn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3626'
Aug 11 12:50:55.055: INFO: stderr: ""
Aug 11 12:50:55.055: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 12:50:55.055: INFO: validating pod update-demo-nautilus-7n2dn
Aug 11 12:50:55.059: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 12:50:55.059: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 11 12:50:55.059: INFO: update-demo-nautilus-7n2dn is verified up and running
Aug 11 12:50:55.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wdcrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3626'
Aug 11 12:50:55.155: INFO: stderr: ""
Aug 11 12:50:55.155: INFO: stdout: "true"
Aug 11 12:50:55.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wdcrr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3626'
Aug 11 12:50:55.238: INFO: stderr: ""
Aug 11 12:50:55.238: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 12:50:55.238: INFO: validating pod update-demo-nautilus-wdcrr
Aug 11 12:50:55.241: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 12:50:55.241: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 11 12:50:55.241: INFO: update-demo-nautilus-wdcrr is verified up and running
STEP: using delete to clean up resources
Aug 11 12:50:55.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3626'
Aug 11 12:50:55.341: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 12:50:55.341: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 11 12:50:55.341: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3626'
Aug 11 12:50:55.443: INFO: stderr: "No resources found in kubectl-3626 namespace.\n"
Aug 11 12:50:55.443: INFO: stdout: ""
Aug 11 12:50:55.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3626 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 11 12:50:55.557: INFO: stderr: ""
Aug 11 12:50:55.557: INFO: stdout: "update-demo-nautilus-7n2dn\nupdate-demo-nautilus-wdcrr\n"
Aug 11 12:50:56.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3626'
Aug 11 12:50:56.153: INFO: stderr: "No resources found in kubectl-3626 namespace.\n"
Aug 11 12:50:56.153: INFO: stdout: ""
Aug 11 12:50:56.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3626 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 11 12:50:56.249: INFO: stderr: ""
Aug 11 12:50:56.249: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:50:56.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3626" for this suite.

• [SLOW TEST:25.777 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":200,"skipped":3220,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:50:56.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-d0aa73a7-a285-4e85-a6cf-c6314156d418
STEP: Creating a pod to test consume configMaps
Aug 11 12:50:56.810: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-13c56baa-c3bc-4e55-98c7-9487ce595560" in namespace "projected-4889" to be "Succeeded or Failed"
Aug 11 12:50:56.876: INFO: Pod "pod-projected-configmaps-13c56baa-c3bc-4e55-98c7-9487ce595560": Phase="Pending", Reason="", readiness=false. Elapsed: 65.968347ms
Aug 11 12:50:58.880: INFO: Pod "pod-projected-configmaps-13c56baa-c3bc-4e55-98c7-9487ce595560": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070113245s
Aug 11 12:51:00.886: INFO: Pod "pod-projected-configmaps-13c56baa-c3bc-4e55-98c7-9487ce595560": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07562729s
STEP: Saw pod success
Aug 11 12:51:00.886: INFO: Pod "pod-projected-configmaps-13c56baa-c3bc-4e55-98c7-9487ce595560" satisfied condition "Succeeded or Failed"
Aug 11 12:51:00.889: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-13c56baa-c3bc-4e55-98c7-9487ce595560 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 12:51:00.906: INFO: Waiting for pod pod-projected-configmaps-13c56baa-c3bc-4e55-98c7-9487ce595560 to disappear
Aug 11 12:51:00.999: INFO: Pod pod-projected-configmaps-13c56baa-c3bc-4e55-98c7-9487ce595560 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:51:00.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4889" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3229,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:51:01.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 11 12:51:01.081: INFO: Waiting up to 5m0s for pod "downward-api-5ce6f581-9050-437d-a080-ec0cb05c2ebf" in namespace "downward-api-3114" to be "Succeeded or Failed"
Aug 11 12:51:01.085: INFO: Pod "downward-api-5ce6f581-9050-437d-a080-ec0cb05c2ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.757741ms
Aug 11 12:51:03.088: INFO: Pod "downward-api-5ce6f581-9050-437d-a080-ec0cb05c2ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006790665s
Aug 11 12:51:05.092: INFO: Pod "downward-api-5ce6f581-9050-437d-a080-ec0cb05c2ebf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011396718s
STEP: Saw pod success
Aug 11 12:51:05.093: INFO: Pod "downward-api-5ce6f581-9050-437d-a080-ec0cb05c2ebf" satisfied condition "Succeeded or Failed"
Aug 11 12:51:05.096: INFO: Trying to get logs from node kali-worker pod downward-api-5ce6f581-9050-437d-a080-ec0cb05c2ebf container dapi-container: 
STEP: delete the pod
Aug 11 12:51:05.123: INFO: Waiting for pod downward-api-5ce6f581-9050-437d-a080-ec0cb05c2ebf to disappear
Aug 11 12:51:05.133: INFO: Pod downward-api-5ce6f581-9050-437d-a080-ec0cb05c2ebf no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:51:05.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3114" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3241,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:51:05.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-955d5aa9-1753-4fb1-b825-42421e0d9a64
STEP: Creating a pod to test consume secrets
Aug 11 12:51:05.267: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bc4debe2-68bf-40e3-9663-0b9ec8ef75c2" in namespace "projected-8701" to be "Succeeded or Failed"
Aug 11 12:51:05.317: INFO: Pod "pod-projected-secrets-bc4debe2-68bf-40e3-9663-0b9ec8ef75c2": Phase="Pending", Reason="", readiness=false. Elapsed: 49.451945ms
Aug 11 12:51:07.353: INFO: Pod "pod-projected-secrets-bc4debe2-68bf-40e3-9663-0b9ec8ef75c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085332123s
Aug 11 12:51:09.357: INFO: Pod "pod-projected-secrets-bc4debe2-68bf-40e3-9663-0b9ec8ef75c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089488019s
STEP: Saw pod success
Aug 11 12:51:09.357: INFO: Pod "pod-projected-secrets-bc4debe2-68bf-40e3-9663-0b9ec8ef75c2" satisfied condition "Succeeded or Failed"
Aug 11 12:51:09.360: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-bc4debe2-68bf-40e3-9663-0b9ec8ef75c2 container projected-secret-volume-test: 
STEP: delete the pod
Aug 11 12:51:09.410: INFO: Waiting for pod pod-projected-secrets-bc4debe2-68bf-40e3-9663-0b9ec8ef75c2 to disappear
Aug 11 12:51:09.415: INFO: Pod pod-projected-secrets-bc4debe2-68bf-40e3-9663-0b9ec8ef75c2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:51:09.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8701" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3279,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:51:09.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0811 12:51:50.172566       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 12:51:50.172: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:51:50.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4067" for this suite.

• [SLOW TEST:40.757 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":204,"skipped":3283,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:51:50.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:52:07.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1993" for this suite.

• [SLOW TEST:17.236 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":205,"skipped":3301,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:52:07.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 11 12:52:08.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 11 12:52:20.959: INFO: >>> kubeConfig: /root/.kube/config
Aug 11 12:52:22.924: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:52:33.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7398" for this suite.

• [SLOW TEST:26.189 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":206,"skipped":3313,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:52:33.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 11 12:52:33.672: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 11 12:52:33.687: INFO: Waiting for terminating namespaces to be deleted...
Aug 11 12:52:33.690: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 11 12:52:33.696: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 11 12:52:33.696: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 11 12:52:33.696: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug 11 12:52:33.696: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 11 12:52:33.696: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug 11 12:52:33.696: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 12:52:33.696: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug 11 12:52:33.696: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 11 12:52:33.696: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 11 12:52:33.702: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 11 12:52:33.702: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 12:52:33.702: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 11 12:52:33.702: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 11 12:52:33.702: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 11 12:52:33.702: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 11 12:52:33.702: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug 11 12:52:33.702: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162a37c8dee2ba0a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162a37c8dffece9d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:52:34.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5164" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":275,"completed":207,"skipped":3336,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:52:34.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:52:34.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf97abf7-93fc-4c49-b867-98fc910ab16e" in namespace "projected-9095" to be "Succeeded or Failed"
Aug 11 12:52:34.830: INFO: Pod "downwardapi-volume-bf97abf7-93fc-4c49-b867-98fc910ab16e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.448847ms
Aug 11 12:52:36.833: INFO: Pod "downwardapi-volume-bf97abf7-93fc-4c49-b867-98fc910ab16e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006721491s
Aug 11 12:52:38.923: INFO: Pod "downwardapi-volume-bf97abf7-93fc-4c49-b867-98fc910ab16e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096323631s
STEP: Saw pod success
Aug 11 12:52:38.923: INFO: Pod "downwardapi-volume-bf97abf7-93fc-4c49-b867-98fc910ab16e" satisfied condition "Succeeded or Failed"
Aug 11 12:52:38.926: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-bf97abf7-93fc-4c49-b867-98fc910ab16e container client-container: 
STEP: delete the pod
Aug 11 12:52:38.987: INFO: Waiting for pod downwardapi-volume-bf97abf7-93fc-4c49-b867-98fc910ab16e to disappear
Aug 11 12:52:39.138: INFO: Pod downwardapi-volume-bf97abf7-93fc-4c49-b867-98fc910ab16e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:52:39.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9095" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3364,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:52:39.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:52:39.443: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 11 12:52:44.447: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 11 12:52:44.447: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 11 12:52:46.450: INFO: Creating deployment "test-rollover-deployment"
Aug 11 12:52:46.509: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 11 12:52:48.516: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 11 12:52:48.521: INFO: Ensure that both replica sets have 1 created replica
Aug 11 12:52:48.526: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 11 12:52:48.533: INFO: Updating deployment test-rollover-deployment
Aug 11 12:52:48.533: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 11 12:52:50.731: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 11 12:52:50.935: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 11 12:52:50.942: INFO: all replica sets need to contain the pod-template-hash label
Aug 11 12:52:50.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747166, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747166, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747168, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747166, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:52:52.950: INFO: all replica sets need to contain the pod-template-hash label
Aug 11 12:52:52.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747166, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747166, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747168, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747166, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:52:54.949: INFO: all replica sets need to contain the pod-template-hash label
Aug 11 12:52:54.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747166, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747166, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747172, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747166, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
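The poll loop above repeats until the controller has observed the latest generation and every replica belongs to the updated, available ReplicaSet. A minimal sketch of that convergence check, using a simplified stand-in struct (not the real k8s.io/api/apps/v1 type or the framework's actual code):

```go
package main

import "fmt"

// DeploymentStatus mirrors only the fields the log prints.
type DeploymentStatus struct {
	ObservedGeneration  int64
	Replicas            int32
	UpdatedReplicas     int32
	ReadyReplicas       int32
	AvailableReplicas   int32
	UnavailableReplicas int32
}

// rolloverComplete reports whether the rollover has finished: the controller
// has seen the latest generation and all replicas are updated and available.
func rolloverComplete(generation int64, desired int32, s DeploymentStatus) bool {
	return s.ObservedGeneration >= generation &&
		s.UpdatedReplicas == desired &&
		s.AvailableReplicas == desired &&
		s.Replicas == desired
}

func main() {
	// Mid-rollover snapshot from the log: 2 replicas, only 1 updated/available.
	mid := DeploymentStatus{ObservedGeneration: 2, Replicas: 2, UpdatedReplicas: 1,
		ReadyReplicas: 2, AvailableReplicas: 1, UnavailableReplicas: 1}
	// Final snapshot: old ReplicaSets scaled to zero, new one fully available.
	done := DeploymentStatus{ObservedGeneration: 2, Replicas: 1, UpdatedReplicas: 1,
		ReadyReplicas: 1, AvailableReplicas: 1}
	fmt.Println(rolloverComplete(2, 1, mid))  // false
	fmt.Println(rolloverComplete(2, 1, done)) // true
}
```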
Aug 11 12:53:04.949: INFO: 
Aug 11 12:53:04.949: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 11 12:53:04.956: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-6740 /apis/apps/v1/namespaces/deployment-6740/deployments/test-rollover-deployment 40730786-8b7c-486d-87ad-cd3187d60509 8570147 2 2020-08-11 12:52:46 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-11 12:52:48 +0000 UTC FieldsV1 <raw managedFields bytes elided>} {kube-controller-manager Update apps/v1 2020-08-11 12:53:03 +0000 UTC FieldsV1 <raw managedFields bytes elided>}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001f26fc8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-11 12:52:46 +0000 UTC,LastTransitionTime:2020-08-11 12:52:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-08-11 12:53:03 +0000 UTC,LastTransitionTime:2020-08-11 12:52:46 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
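The `managedFields` entries in the dump above are printed as raw byte slices of ASCII codes (`Raw:*[123 34 102 …]`); converting the slice back to a string recovers the FieldsV1 JSON. A self-contained sketch of that decoding (the sample slice is a shortened illustration, not one of the full arrays from the log):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeFieldsV1 parses a FieldsV1 Raw byte slice as JSON.
func decodeFieldsV1(raw []byte) (map[string]interface{}, error) {
	var fields map[string]interface{}
	if err := json.Unmarshal(raw, &fields); err != nil {
		return nil, err
	}
	return fields, nil
}

func main() {
	// 123 34 102 58 109 101 116 97 100 97 116 97 34 ... is ASCII for `{"f:metadata"...`.
	// A short sample encoding `{"f:metadata":{}}`:
	raw := []byte{123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123, 125, 125}
	fmt.Println(string(raw)) // {"f:metadata":{}}

	fields, err := decodeFieldsV1(raw)
	if err != nil {
		panic(err)
	}
	_, ok := fields["f:metadata"]
	fmt.Println(ok) // true
}
```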

Aug 11 12:53:04.959: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-6740 /apis/apps/v1/namespaces/deployment-6740/replicasets/test-rollover-deployment-84f7f6f64b 6ccbb60d-e98a-4413-994d-26cc7f9e0e37 8570135 2 2020-08-11 12:52:48 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 40730786-8b7c-486d-87ad-cd3187d60509 0xc005858757 0xc005858758}] []  [{kube-controller-manager Update apps/v1 2020-08-11 12:53:02 +0000 UTC FieldsV1 <raw managedFields bytes elided>}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0058587e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 11 12:53:04.959: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 11 12:53:04.959: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-6740 /apis/apps/v1/namespaces/deployment-6740/replicasets/test-rollover-controller 8e0097a8-256d-438f-bf7b-ab838fe8cb83 8570146 2 2020-08-11 12:52:39 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 40730786-8b7c-486d-87ad-cd3187d60509 0xc00585850f 0xc005858520}] []  [{e2e.test Update apps/v1 2020-08-11 12:52:39 +0000 UTC FieldsV1 <raw managedFields bytes elided>} {kube-controller-manager Update apps/v1 2020-08-11 12:53:03 +0000 UTC FieldsV1 <raw managedFields bytes elided>}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0058585c8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 11 12:53:04.960: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-6740 /apis/apps/v1/namespaces/deployment-6740/replicasets/test-rollover-deployment-5686c4cfd5 42b6d579-21bf-4022-a36c-1490a2f7087d 8570085 2 2020-08-11 12:52:46 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 40730786-8b7c-486d-87ad-cd3187d60509 0xc005858647 0xc005858648}] []  [{kube-controller-manager Update apps/v1 2020-08-11 12:52:48 +0000 UTC FieldsV1 <raw managedFields bytes elided>}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0058586d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
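The "Ensure that both old replica sets have no replicas" step reduces to checking that every old ReplicaSet's `Spec.Replicas` is zero. A sketch with a hypothetical minimal struct (not the e2e framework's actual types):

```go
package main

import "fmt"

// replicaSet captures just the fields the check needs.
type replicaSet struct {
	name     string
	replicas int32 // corresponds to Spec.Replicas
}

// oldRSsScaledDown reports whether every old ReplicaSet is scaled to zero.
func oldRSsScaledDown(old []replicaSet) bool {
	for _, rs := range old {
		if rs.replicas != 0 {
			return false
		}
	}
	return true
}

func main() {
	// The two old ReplicaSets from the dumps above, both at Replicas:*0.
	old := []replicaSet{
		{name: "test-rollover-controller", replicas: 0},
		{name: "test-rollover-deployment-5686c4cfd5", replicas: 0},
	}
	fmt.Println(oldRSsScaledDown(old)) // true
}
```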
Aug 11 12:53:04.963: INFO: Pod "test-rollover-deployment-84f7f6f64b-9mjxj" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-9mjxj test-rollover-deployment-84f7f6f64b- deployment-6740 /api/v1/namespaces/deployment-6740/pods/test-rollover-deployment-84f7f6f64b-9mjxj 8df1eb75-1aff-4f22-a909-a4b7a79fc6f4 8570103 0 2020-08-11 12:52:48 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 6ccbb60d-e98a-4413-994d-26cc7f9e0e37 0xc005858e97 0xc005858e98}] []  [{kube-controller-manager Update v1 2020-08-11 12:52:48 +0000 UTC FieldsV1 <raw managedFields bytes elided>} {kubelet Update v1 2020-08-11 12:52:52 +0000 UTC FieldsV1 <raw managedFields bytes elided>}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6nckk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6nckk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6nckk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:
nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:52:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:52:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:52:52 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 12:52:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.164,StartTime:2020-08-11 12:52:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 12:52:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://95cef40ae9fc37dcfdd902887a24b2f6ea865bcdfd3747326f8728bea6be43e9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.164,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:53:04.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6740" for this suite.

• [SLOW TEST:25.886 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":209,"skipped":3377,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:53:05.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-71489912-6d4a-4298-9e05-94a201cf4127
STEP: Creating a pod to test consume configMaps
Aug 11 12:53:05.164: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8e228eb3-e2b9-4c26-8432-6393eb299905" in namespace "projected-6091" to be "Succeeded or Failed"
Aug 11 12:53:05.410: INFO: Pod "pod-projected-configmaps-8e228eb3-e2b9-4c26-8432-6393eb299905": Phase="Pending", Reason="", readiness=false. Elapsed: 246.299429ms
Aug 11 12:53:07.414: INFO: Pod "pod-projected-configmaps-8e228eb3-e2b9-4c26-8432-6393eb299905": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250860196s
Aug 11 12:53:09.551: INFO: Pod "pod-projected-configmaps-8e228eb3-e2b9-4c26-8432-6393eb299905": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.387638035s
STEP: Saw pod success
Aug 11 12:53:09.551: INFO: Pod "pod-projected-configmaps-8e228eb3-e2b9-4c26-8432-6393eb299905" satisfied condition "Succeeded or Failed"
Aug 11 12:53:09.554: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-8e228eb3-e2b9-4c26-8432-6393eb299905 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 12:53:09.690: INFO: Waiting for pod pod-projected-configmaps-8e228eb3-e2b9-4c26-8432-6393eb299905 to disappear
Aug 11 12:53:09.706: INFO: Pod pod-projected-configmaps-8e228eb3-e2b9-4c26-8432-6393eb299905 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:53:09.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6091" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3405,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:53:09.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:53:10.450: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:53:12.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747190, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747190, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747190, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747190, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:53:15.886: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:53:28.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1005" for this suite.
STEP: Destroying namespace "webhook-1005-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.672 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":211,"skipped":3418,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:53:28.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-1559/configmap-test-7ba126b5-a33c-43cd-b30a-3b2ee4c002e5
STEP: Creating a pod to test consume configMaps
Aug 11 12:53:28.540: INFO: Waiting up to 5m0s for pod "pod-configmaps-a72a5795-56b5-4cf4-9b36-91de2104141a" in namespace "configmap-1559" to be "Succeeded or Failed"
Aug 11 12:53:28.935: INFO: Pod "pod-configmaps-a72a5795-56b5-4cf4-9b36-91de2104141a": Phase="Pending", Reason="", readiness=false. Elapsed: 394.2662ms
Aug 11 12:53:30.940: INFO: Pod "pod-configmaps-a72a5795-56b5-4cf4-9b36-91de2104141a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.399263799s
Aug 11 12:53:32.944: INFO: Pod "pod-configmaps-a72a5795-56b5-4cf4-9b36-91de2104141a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.403276468s
STEP: Saw pod success
Aug 11 12:53:32.944: INFO: Pod "pod-configmaps-a72a5795-56b5-4cf4-9b36-91de2104141a" satisfied condition "Succeeded or Failed"
Aug 11 12:53:32.947: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-a72a5795-56b5-4cf4-9b36-91de2104141a container env-test: 
STEP: delete the pod
Aug 11 12:53:32.985: INFO: Waiting for pod pod-configmaps-a72a5795-56b5-4cf4-9b36-91de2104141a to disappear
Aug 11 12:53:33.006: INFO: Pod pod-configmaps-a72a5795-56b5-4cf4-9b36-91de2104141a no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:53:33.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1559" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3430,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:53:33.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 11 12:53:37.620: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4c305787-a078-4dd8-aad6-d834ff0e0b6e"
Aug 11 12:53:37.620: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4c305787-a078-4dd8-aad6-d834ff0e0b6e" in namespace "pods-3178" to be "terminated due to deadline exceeded"
Aug 11 12:53:37.637: INFO: Pod "pod-update-activedeadlineseconds-4c305787-a078-4dd8-aad6-d834ff0e0b6e": Phase="Running", Reason="", readiness=true. Elapsed: 16.983783ms
Aug 11 12:53:39.640: INFO: Pod "pod-update-activedeadlineseconds-4c305787-a078-4dd8-aad6-d834ff0e0b6e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.020563078s
Aug 11 12:53:39.640: INFO: Pod "pod-update-activedeadlineseconds-4c305787-a078-4dd8-aad6-d834ff0e0b6e" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:53:39.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3178" for this suite.

• [SLOW TEST:6.657 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3450,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:53:39.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:53:39.726: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:53:45.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1844" for this suite.

• [SLOW TEST:6.324 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":214,"skipped":3463,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:53:45.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-b92b590a-75ca-4c31-85a7-22568bd69073
STEP: Creating a pod to test consume configMaps
Aug 11 12:53:46.058: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4baa102e-c052-4ad7-ba25-520845a6f015" in namespace "projected-1018" to be "Succeeded or Failed"
Aug 11 12:53:46.128: INFO: Pod "pod-projected-configmaps-4baa102e-c052-4ad7-ba25-520845a6f015": Phase="Pending", Reason="", readiness=false. Elapsed: 70.051387ms
Aug 11 12:53:48.258: INFO: Pod "pod-projected-configmaps-4baa102e-c052-4ad7-ba25-520845a6f015": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200133828s
Aug 11 12:53:50.262: INFO: Pod "pod-projected-configmaps-4baa102e-c052-4ad7-ba25-520845a6f015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.203700817s
STEP: Saw pod success
Aug 11 12:53:50.262: INFO: Pod "pod-projected-configmaps-4baa102e-c052-4ad7-ba25-520845a6f015" satisfied condition "Succeeded or Failed"
Aug 11 12:53:50.264: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-4baa102e-c052-4ad7-ba25-520845a6f015 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 12:53:50.725: INFO: Waiting for pod pod-projected-configmaps-4baa102e-c052-4ad7-ba25-520845a6f015 to disappear
Aug 11 12:53:50.761: INFO: Pod pod-projected-configmaps-4baa102e-c052-4ad7-ba25-520845a6f015 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:53:50.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1018" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3530,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:53:50.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:53:51.442: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:53:53.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747231, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747231, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747231, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747231, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:53:56.516: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:53:56.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9716-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:53:57.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6707" for this suite.
STEP: Destroying namespace "webhook-6707-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.012 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":216,"skipped":3556,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:53:57.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-9314
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9314 to expose endpoints map[]
Aug 11 12:53:57.952: INFO: Get endpoints failed (35.058042ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Aug 11 12:53:59.072: INFO: successfully validated that service endpoint-test2 in namespace services-9314 exposes endpoints map[] (1.15501677s elapsed)
STEP: Creating pod pod1 in namespace services-9314
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9314 to expose endpoints map[pod1:[80]]
Aug 11 12:54:03.450: INFO: successfully validated that service endpoint-test2 in namespace services-9314 exposes endpoints map[pod1:[80]] (4.363058043s elapsed)
STEP: Creating pod pod2 in namespace services-9314
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9314 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 11 12:54:07.842: INFO: successfully validated that service endpoint-test2 in namespace services-9314 exposes endpoints map[pod1:[80] pod2:[80]] (4.388159739s elapsed)
STEP: Deleting pod pod1 in namespace services-9314
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9314 to expose endpoints map[pod2:[80]]
Aug 11 12:54:07.937: INFO: successfully validated that service endpoint-test2 in namespace services-9314 exposes endpoints map[pod2:[80]] (89.938795ms elapsed)
STEP: Deleting pod pod2 in namespace services-9314
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9314 to expose endpoints map[]
Aug 11 12:54:09.005: INFO: successfully validated that service endpoint-test2 in namespace services-9314 exposes endpoints map[] (1.063675922s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:54:09.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9314" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:11.311 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":217,"skipped":3559,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:54:09.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-8193
STEP: creating replication controller nodeport-test in namespace services-8193
I0811 12:54:09.639198       7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8193, replica count: 2
I0811 12:54:12.689676       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 12:54:15.689885       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 11 12:54:15.689: INFO: Creating new exec pod
Aug 11 12:54:24.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-8193 execpodj6mmq -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug 11 12:54:24.965: INFO: stderr: "I0811 12:54:24.857596    2344 log.go:172] (0xc000559ad0) (0xc00090a0a0) Create stream\nI0811 12:54:24.857659    2344 log.go:172] (0xc000559ad0) (0xc00090a0a0) Stream added, broadcasting: 1\nI0811 12:54:24.865521    2344 log.go:172] (0xc000559ad0) Reply frame received for 1\nI0811 12:54:24.865562    2344 log.go:172] (0xc000559ad0) (0xc0007bf400) Create stream\nI0811 12:54:24.865573    2344 log.go:172] (0xc000559ad0) (0xc0007bf400) Stream added, broadcasting: 3\nI0811 12:54:24.866689    2344 log.go:172] (0xc000559ad0) Reply frame received for 3\nI0811 12:54:24.866723    2344 log.go:172] (0xc000559ad0) (0xc00090a1e0) Create stream\nI0811 12:54:24.866736    2344 log.go:172] (0xc000559ad0) (0xc00090a1e0) Stream added, broadcasting: 5\nI0811 12:54:24.867585    2344 log.go:172] (0xc000559ad0) Reply frame received for 5\nI0811 12:54:24.954697    2344 log.go:172] (0xc000559ad0) Data frame received for 5\nI0811 12:54:24.954725    2344 log.go:172] (0xc00090a1e0) (5) Data frame handling\nI0811 12:54:24.954741    2344 log.go:172] (0xc00090a1e0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0811 12:54:24.955666    2344 log.go:172] (0xc000559ad0) Data frame received for 5\nI0811 12:54:24.955689    2344 log.go:172] (0xc00090a1e0) (5) Data frame handling\nI0811 12:54:24.955702    2344 log.go:172] (0xc00090a1e0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0811 12:54:24.955962    2344 log.go:172] (0xc000559ad0) Data frame received for 5\nI0811 12:54:24.955984    2344 log.go:172] (0xc00090a1e0) (5) Data frame handling\nI0811 12:54:24.956004    2344 log.go:172] (0xc000559ad0) Data frame received for 3\nI0811 12:54:24.956022    2344 log.go:172] (0xc0007bf400) (3) Data frame handling\nI0811 12:54:24.957620    2344 log.go:172] (0xc000559ad0) Data frame received for 1\nI0811 12:54:24.957640    2344 log.go:172] (0xc00090a0a0) (1) Data frame handling\nI0811 12:54:24.957662    2344 log.go:172] (0xc00090a0a0) (1) Data frame sent\nI0811 12:54:24.957862    2344 log.go:172] (0xc000559ad0) (0xc00090a0a0) Stream removed, broadcasting: 1\nI0811 12:54:24.957886    2344 log.go:172] (0xc000559ad0) Go away received\nI0811 12:54:24.958357    2344 log.go:172] (0xc000559ad0) (0xc00090a0a0) Stream removed, broadcasting: 1\nI0811 12:54:24.958386    2344 log.go:172] (0xc000559ad0) (0xc0007bf400) Stream removed, broadcasting: 3\nI0811 12:54:24.958404    2344 log.go:172] (0xc000559ad0) (0xc00090a1e0) Stream removed, broadcasting: 5\n"
Aug 11 12:54:24.965: INFO: stdout: ""
Aug 11 12:54:24.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-8193 execpodj6mmq -- /bin/sh -x -c nc -zv -t -w 2 10.108.141.57 80'
Aug 11 12:54:25.171: INFO: stderr: "I0811 12:54:25.083739    2367 log.go:172] (0xc00003a790) (0xc00090c0a0) Create stream\nI0811 12:54:25.083793    2367 log.go:172] (0xc00003a790) (0xc00090c0a0) Stream added, broadcasting: 1\nI0811 12:54:25.086186    2367 log.go:172] (0xc00003a790) Reply frame received for 1\nI0811 12:54:25.086232    2367 log.go:172] (0xc00003a790) (0xc000711220) Create stream\nI0811 12:54:25.086251    2367 log.go:172] (0xc00003a790) (0xc000711220) Stream added, broadcasting: 3\nI0811 12:54:25.087110    2367 log.go:172] (0xc00003a790) Reply frame received for 3\nI0811 12:54:25.087146    2367 log.go:172] (0xc00003a790) (0xc0005ea000) Create stream\nI0811 12:54:25.087156    2367 log.go:172] (0xc00003a790) (0xc0005ea000) Stream added, broadcasting: 5\nI0811 12:54:25.087878    2367 log.go:172] (0xc00003a790) Reply frame received for 5\nI0811 12:54:25.163455    2367 log.go:172] (0xc00003a790) Data frame received for 3\nI0811 12:54:25.163488    2367 log.go:172] (0xc000711220) (3) Data frame handling\nI0811 12:54:25.163796    2367 log.go:172] (0xc00003a790) Data frame received for 5\nI0811 12:54:25.163814    2367 log.go:172] (0xc0005ea000) (5) Data frame handling\nI0811 12:54:25.163827    2367 log.go:172] (0xc0005ea000) (5) Data frame sent\nI0811 12:54:25.163835    2367 log.go:172] (0xc00003a790) Data frame received for 5\nI0811 12:54:25.163843    2367 log.go:172] (0xc0005ea000) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.141.57 80\nConnection to 10.108.141.57 80 port [tcp/http] succeeded!\nI0811 12:54:25.165184    2367 log.go:172] (0xc00003a790) Data frame received for 1\nI0811 12:54:25.165219    2367 log.go:172] (0xc00090c0a0) (1) Data frame handling\nI0811 12:54:25.165231    2367 log.go:172] (0xc00090c0a0) (1) Data frame sent\nI0811 12:54:25.165246    2367 log.go:172] (0xc00003a790) (0xc00090c0a0) Stream removed, broadcasting: 1\nI0811 12:54:25.165263    2367 log.go:172] (0xc00003a790) Go away received\nI0811 12:54:25.165590    2367 log.go:172] (0xc00003a790) (0xc00090c0a0) Stream removed, broadcasting: 1\nI0811 12:54:25.165621    2367 log.go:172] (0xc00003a790) (0xc000711220) Stream removed, broadcasting: 3\nI0811 12:54:25.165640    2367 log.go:172] (0xc00003a790) (0xc0005ea000) Stream removed, broadcasting: 5\n"
Aug 11 12:54:25.172: INFO: stdout: ""
Aug 11 12:54:25.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-8193 execpodj6mmq -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31900'
Aug 11 12:54:25.379: INFO: stderr: "I0811 12:54:25.293241    2388 log.go:172] (0xc00099c0b0) (0xc00085c0a0) Create stream\nI0811 12:54:25.293286    2388 log.go:172] (0xc00099c0b0) (0xc00085c0a0) Stream added, broadcasting: 1\nI0811 12:54:25.295882    2388 log.go:172] (0xc00099c0b0) Reply frame received for 1\nI0811 12:54:25.295919    2388 log.go:172] (0xc00099c0b0) (0xc000972000) Create stream\nI0811 12:54:25.295927    2388 log.go:172] (0xc00099c0b0) (0xc000972000) Stream added, broadcasting: 3\nI0811 12:54:25.296884    2388 log.go:172] (0xc00099c0b0) Reply frame received for 3\nI0811 12:54:25.296935    2388 log.go:172] (0xc00099c0b0) (0xc0009bc000) Create stream\nI0811 12:54:25.296951    2388 log.go:172] (0xc00099c0b0) (0xc0009bc000) Stream added, broadcasting: 5\nI0811 12:54:25.297842    2388 log.go:172] (0xc00099c0b0) Reply frame received for 5\nI0811 12:54:25.370005    2388 log.go:172] (0xc00099c0b0) Data frame received for 3\nI0811 12:54:25.370041    2388 log.go:172] (0xc000972000) (3) Data frame handling\nI0811 12:54:25.370065    2388 log.go:172] (0xc00099c0b0) Data frame received for 5\nI0811 12:54:25.370074    2388 log.go:172] (0xc0009bc000) (5) Data frame handling\nI0811 12:54:25.370085    2388 log.go:172] (0xc0009bc000) (5) Data frame sent\nI0811 12:54:25.370095    2388 log.go:172] (0xc00099c0b0) Data frame received for 5\nI0811 12:54:25.370116    2388 log.go:172] (0xc0009bc000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 31900\nConnection to 172.18.0.13 31900 port [tcp/31900] succeeded!\nI0811 12:54:25.371484    2388 log.go:172] (0xc00099c0b0) Data frame received for 1\nI0811 12:54:25.371522    2388 log.go:172] (0xc00085c0a0) (1) Data frame handling\nI0811 12:54:25.371546    2388 log.go:172] (0xc00085c0a0) (1) Data frame sent\nI0811 12:54:25.371574    2388 log.go:172] (0xc00099c0b0) (0xc00085c0a0) Stream removed, broadcasting: 1\nI0811 12:54:25.371601    2388 log.go:172] (0xc00099c0b0) Go away received\nI0811 12:54:25.371868    2388 log.go:172] (0xc00099c0b0) (0xc00085c0a0) Stream removed, broadcasting: 1\nI0811 12:54:25.371883    2388 log.go:172] (0xc00099c0b0) (0xc000972000) Stream removed, broadcasting: 3\nI0811 12:54:25.371889    2388 log.go:172] (0xc00099c0b0) (0xc0009bc000) Stream removed, broadcasting: 5\n"
Aug 11 12:54:25.379: INFO: stdout: ""
Aug 11 12:54:25.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-8193 execpodj6mmq -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31900'
Aug 11 12:54:25.608: INFO: stderr: "I0811 12:54:25.517408    2412 log.go:172] (0xc000ab40b0) (0xc0007ff680) Create stream\nI0811 12:54:25.517467    2412 log.go:172] (0xc000ab40b0) (0xc0007ff680) Stream added, broadcasting: 1\nI0811 12:54:25.520563    2412 log.go:172] (0xc000ab40b0) Reply frame received for 1\nI0811 12:54:25.520600    2412 log.go:172] (0xc000ab40b0) (0xc0007ff720) Create stream\nI0811 12:54:25.520608    2412 log.go:172] (0xc000ab40b0) (0xc0007ff720) Stream added, broadcasting: 3\nI0811 12:54:25.521748    2412 log.go:172] (0xc000ab40b0) Reply frame received for 3\nI0811 12:54:25.521817    2412 log.go:172] (0xc000ab40b0) (0xc0007ff7c0) Create stream\nI0811 12:54:25.521846    2412 log.go:172] (0xc000ab40b0) (0xc0007ff7c0) Stream added, broadcasting: 5\nI0811 12:54:25.522785    2412 log.go:172] (0xc000ab40b0) Reply frame received for 5\nI0811 12:54:25.601272    2412 log.go:172] (0xc000ab40b0) Data frame received for 5\nI0811 12:54:25.601305    2412 log.go:172] (0xc0007ff7c0) (5) Data frame handling\nI0811 12:54:25.601314    2412 log.go:172] (0xc0007ff7c0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 31900\nConnection to 172.18.0.15 31900 port [tcp/31900] succeeded!\nI0811 12:54:25.601326    2412 log.go:172] (0xc000ab40b0) Data frame received for 3\nI0811 12:54:25.601333    2412 log.go:172] (0xc0007ff720) (3) Data frame handling\nI0811 12:54:25.601492    2412 log.go:172] (0xc000ab40b0) Data frame received for 5\nI0811 12:54:25.601518    2412 log.go:172] (0xc0007ff7c0) (5) Data frame handling\nI0811 12:54:25.602757    2412 log.go:172] (0xc000ab40b0) Data frame received for 1\nI0811 12:54:25.602772    2412 log.go:172] (0xc0007ff680) (1) Data frame handling\nI0811 12:54:25.602785    2412 log.go:172] (0xc0007ff680) (1) Data frame sent\nI0811 12:54:25.602918    2412 log.go:172] (0xc000ab40b0) (0xc0007ff680) Stream removed, broadcasting: 1\nI0811 12:54:25.602969    2412 log.go:172] (0xc000ab40b0) Go away received\nI0811 12:54:25.603284    2412 log.go:172] (0xc000ab40b0) (0xc0007ff680) Stream removed, broadcasting: 1\nI0811 12:54:25.603304    2412 log.go:172] (0xc000ab40b0) (0xc0007ff720) Stream removed, broadcasting: 3\nI0811 12:54:25.603314    2412 log.go:172] (0xc000ab40b0) (0xc0007ff7c0) Stream removed, broadcasting: 5\n"
Aug 11 12:54:25.608: INFO: stdout: ""
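(Annotation, not part of the test output.) The four `nc -zv -t -w 2 <target> <port>` invocations above probe the same service through its DNS name, its ClusterIP (10.108.141.57:80), and each node's IP on the allocated NodePort (172.18.0.13/172.18.0.15:31900). A minimal Python sketch of an equivalent TCP reachability probe; the function name and the commented targets are illustrative, not taken from the test framework:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Roughly what `nc -zv -t -w 2 host port` checks: can a TCP
    connection be established within the timeout?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Inside the cluster, the test expects all four probes to succeed, e.g.:
#   port_open("nodeport-test", 80)      # service DNS name + service port
#   port_open("10.108.141.57", 80)      # ClusterIP + service port
#   port_open("172.18.0.13", 31900)     # node IP + NodePort
#   port_open("172.18.0.15", 31900)     # second node IP + NodePort
```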
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:54:25.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8193" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:16.548 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":218,"skipped":3572,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:54:25.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:54:25.724: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af1fad59-33eb-4c7c-9b4c-4f5444d7ddbf" in namespace "projected-8809" to be "Succeeded or Failed"
Aug 11 12:54:25.740: INFO: Pod "downwardapi-volume-af1fad59-33eb-4c7c-9b4c-4f5444d7ddbf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.239527ms
Aug 11 12:54:28.442: INFO: Pod "downwardapi-volume-af1fad59-33eb-4c7c-9b4c-4f5444d7ddbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.718225035s
Aug 11 12:54:30.446: INFO: Pod "downwardapi-volume-af1fad59-33eb-4c7c-9b4c-4f5444d7ddbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722512384s
Aug 11 12:54:32.552: INFO: Pod "downwardapi-volume-af1fad59-33eb-4c7c-9b4c-4f5444d7ddbf": Phase="Running", Reason="", readiness=true. Elapsed: 6.827948042s
Aug 11 12:54:34.565: INFO: Pod "downwardapi-volume-af1fad59-33eb-4c7c-9b4c-4f5444d7ddbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.841443656s
STEP: Saw pod success
Aug 11 12:54:34.565: INFO: Pod "downwardapi-volume-af1fad59-33eb-4c7c-9b4c-4f5444d7ddbf" satisfied condition "Succeeded or Failed"
Aug 11 12:54:34.569: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-af1fad59-33eb-4c7c-9b4c-4f5444d7ddbf container client-container: 
STEP: delete the pod
Aug 11 12:54:34.901: INFO: Waiting for pod downwardapi-volume-af1fad59-33eb-4c7c-9b4c-4f5444d7ddbf to disappear
Aug 11 12:54:34.926: INFO: Pod downwardapi-volume-af1fad59-33eb-4c7c-9b4c-4f5444d7ddbf no longer exists
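(Annotation, not part of the test output.) The pod above mounts a projected downwardAPI volume exposing `limits.cpu` for a container that sets no CPU limit, so the kubelet substitutes the node's allocatable CPU. A hedged sketch of that manifest shape; the pod name, mount path, and command here are illustrative, not the generated values from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the e2e test appends a random UID
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # No resources.limits.cpu is set, so the downward API falls back
    # to the node's allocatable CPU.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```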
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:54:34.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8809" for this suite.

• [SLOW TEST:9.517 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3587,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:54:35.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:54:35.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2114'
Aug 11 12:54:35.896: INFO: stderr: ""
Aug 11 12:54:35.896: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Aug 11 12:54:35.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2114'
Aug 11 12:54:36.862: INFO: stderr: ""
Aug 11 12:54:36.862: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 11 12:54:37.867: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 11 12:54:37.868: INFO: Found 0 / 1
Aug 11 12:54:38.865: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 11 12:54:38.865: INFO: Found 0 / 1
Aug 11 12:54:39.875: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 11 12:54:39.875: INFO: Found 1 / 1
Aug 11 12:54:39.875: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 11 12:54:39.877: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 11 12:54:39.877: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 11 12:54:39.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe pod agnhost-master-v2k2j --namespace=kubectl-2114'
Aug 11 12:54:40.212: INFO: stderr: ""
Aug 11 12:54:40.212: INFO: stdout: "Name:         agnhost-master-v2k2j\nNamespace:    kubectl-2114\nPriority:     0\nNode:         kali-worker2/172.18.0.15\nStart Time:   Tue, 11 Aug 2020 12:54:35 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.158\nIPs:\n  IP:           10.244.1.158\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://b8d4a1f2d6e4d59579689db4b9bc929460a5663126b1646defdb8df75c9d281f\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 11 Aug 2020 12:54:38 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-58xsz (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-58xsz:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-58xsz\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                   Message\n  ----    ------     ----       ----                   -------\n  Normal  Scheduled    default-scheduler      Successfully assigned kubectl-2114/agnhost-master-v2k2j to kali-worker2\n  Normal  Pulled     3s         kubelet, kali-worker2  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    2s         kubelet, kali-worker2  Created container agnhost-master\n  Normal  Started    1s         kubelet, kali-worker2  Started container agnhost-master\n"
Aug 11 12:54:40.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2114'
Aug 11 12:54:40.557: INFO: stderr: ""
Aug 11 12:54:40.557: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-2114\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-master-v2k2j\n"
Aug 11 12:54:40.557: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2114'
Aug 11 12:54:40.725: INFO: stderr: ""
Aug 11 12:54:40.725: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-2114\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.101.195.54\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.158:6379\nSession Affinity:  None\nEvents:            \n"
Aug 11 12:54:40.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe node kali-control-plane'
Aug 11 12:54:40.848: INFO: stderr: ""
Aug 11 12:54:40.848: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 10 Jul 2020 10:27:46 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Tue, 11 Aug 2020 12:54:32 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Tue, 11 Aug 2020 12:50:16 +0000   Fri, 10 Jul 2020 10:27:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Tue, 11 Aug 2020 12:50:16 +0000   Fri, 10 Jul 2020 10:27:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Tue, 11 Aug 2020 12:50:16 +0000   Fri, 10 Jul 2020 10:27:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Tue, 11 Aug 2020 12:50:16 +0000   Fri, 10 Jul 2020 10:28:23 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.16\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 d83d42c4b42d4de1b3233683d9cadf95\n  System UUID:                e06c57c7-ce4f-4ae9-8bb6-40f1dc0e1a64\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu 20.04 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.0-beta.1-34-g49b0743c\n  Kubelet Version:            v1.18.4\n  Kube-Proxy Version:         v1.18.4\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-qtcqs                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     32d\n  kube-system                 coredns-66bff467f8-tjkg9                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     32d\n  kube-system                 etcd-kali-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32d\n  kube-system                 kindnet-zxw2f                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      32d\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         32d\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         32d\n  kube-system                 kube-proxy-xmqbs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         32d\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         32d\n  local-path-storage          local-path-provisioner-67795f75bd-clsb6       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              \n"
Aug 11 12:54:40.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe namespace kubectl-2114'
Aug 11 12:54:40.939: INFO: stderr: ""
Aug 11 12:54:40.939: INFO: stdout: "Name:         kubectl-2114\nLabels:       e2e-framework=kubectl\n              e2e-run=8c2d6a9a-828b-42e7-bcb2-130a622968b9\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:54:40.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2114" for this suite.

• [SLOW TEST:5.786 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":220,"skipped":3605,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:54:40.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:54:43.570: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 11 12:54:46.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747283, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747283, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747284, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747282, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:54:48.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747283, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747283, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747284, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747282, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:54:50.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747283, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747283, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747284, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747282, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:54:52.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747283, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747283, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747284, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747282, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:54:54.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747283, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747283, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747284, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747282, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:54:56.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747283, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747283, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747284, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747282, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:54:59.621: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:54:59.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:55:01.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-541" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:20.194 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":221,"skipped":3607,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:55:01.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:55:02.606: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:55:04.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747302, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:55:07.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747302, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:55:08.940: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747302, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:55:11.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747302, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:55:13.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747303, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747302, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:55:15.999: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:55:16.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-488" for this suite.
STEP: Destroying namespace "webhook-488-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.145 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":222,"skipped":3642,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:55:16.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-11e06583-cd6b-4c13-bdfa-4c823462a626
Aug 11 12:55:16.401: INFO: Pod name my-hostname-basic-11e06583-cd6b-4c13-bdfa-4c823462a626: Found 0 pods out of 1
Aug 11 12:55:21.487: INFO: Pod name my-hostname-basic-11e06583-cd6b-4c13-bdfa-4c823462a626: Found 1 pods out of 1
Aug 11 12:55:21.487: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-11e06583-cd6b-4c13-bdfa-4c823462a626" are running
Aug 11 12:55:21.555: INFO: Pod "my-hostname-basic-11e06583-cd6b-4c13-bdfa-4c823462a626-r57qh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 12:55:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 12:55:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 12:55:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 12:55:16 +0000 UTC Reason: Message:}])
Aug 11 12:55:21.555: INFO: Trying to dial the pod
Aug 11 12:55:26.566: INFO: Controller my-hostname-basic-11e06583-cd6b-4c13-bdfa-4c823462a626: Got expected result from replica 1 [my-hostname-basic-11e06583-cd6b-4c13-bdfa-4c823462a626-r57qh]: "my-hostname-basic-11e06583-cd6b-4c13-bdfa-4c823462a626-r57qh", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:55:26.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1548" for this suite.

• [SLOW TEST:10.290 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":223,"skipped":3650,"failed":0}
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:55:26.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-1b69b07e-39b8-4661-8743-143732b52e54
STEP: Creating a pod to test consume configMaps
Aug 11 12:55:27.612: INFO: Waiting up to 5m0s for pod "pod-configmaps-e53d7ddb-01e8-49b3-a09f-974934c03f35" in namespace "configmap-4404" to be "Succeeded or Failed"
Aug 11 12:55:27.738: INFO: Pod "pod-configmaps-e53d7ddb-01e8-49b3-a09f-974934c03f35": Phase="Pending", Reason="", readiness=false. Elapsed: 125.13943ms
Aug 11 12:55:30.247: INFO: Pod "pod-configmaps-e53d7ddb-01e8-49b3-a09f-974934c03f35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.634613969s
Aug 11 12:55:32.966: INFO: Pod "pod-configmaps-e53d7ddb-01e8-49b3-a09f-974934c03f35": Phase="Pending", Reason="", readiness=false. Elapsed: 5.353725862s
Aug 11 12:55:35.013: INFO: Pod "pod-configmaps-e53d7ddb-01e8-49b3-a09f-974934c03f35": Phase="Pending", Reason="", readiness=false. Elapsed: 7.40024991s
Aug 11 12:55:37.109: INFO: Pod "pod-configmaps-e53d7ddb-01e8-49b3-a09f-974934c03f35": Phase="Running", Reason="", readiness=true. Elapsed: 9.496708661s
Aug 11 12:55:39.113: INFO: Pod "pod-configmaps-e53d7ddb-01e8-49b3-a09f-974934c03f35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.500907184s
STEP: Saw pod success
Aug 11 12:55:39.113: INFO: Pod "pod-configmaps-e53d7ddb-01e8-49b3-a09f-974934c03f35" satisfied condition "Succeeded or Failed"
Aug 11 12:55:39.116: INFO: Trying to get logs from node kali-worker pod pod-configmaps-e53d7ddb-01e8-49b3-a09f-974934c03f35 container configmap-volume-test: 
STEP: delete the pod
Aug 11 12:55:39.157: INFO: Waiting for pod pod-configmaps-e53d7ddb-01e8-49b3-a09f-974934c03f35 to disappear
Aug 11 12:55:39.174: INFO: Pod pod-configmaps-e53d7ddb-01e8-49b3-a09f-974934c03f35 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:55:39.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4404" for this suite.

• [SLOW TEST:12.609 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3650,"failed":0}
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:55:39.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:55:39.276: INFO: Waiting up to 5m0s for pod "busybox-user-65534-3c111d49-1e32-4b01-9bea-a974d3f6d724" in namespace "security-context-test-796" to be "Succeeded or Failed"
Aug 11 12:55:39.409: INFO: Pod "busybox-user-65534-3c111d49-1e32-4b01-9bea-a974d3f6d724": Phase="Pending", Reason="", readiness=false. Elapsed: 132.733911ms
Aug 11 12:55:41.484: INFO: Pod "busybox-user-65534-3c111d49-1e32-4b01-9bea-a974d3f6d724": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208085962s
Aug 11 12:55:43.532: INFO: Pod "busybox-user-65534-3c111d49-1e32-4b01-9bea-a974d3f6d724": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255919119s
Aug 11 12:55:45.545: INFO: Pod "busybox-user-65534-3c111d49-1e32-4b01-9bea-a974d3f6d724": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.26929924s
Aug 11 12:55:45.545: INFO: Pod "busybox-user-65534-3c111d49-1e32-4b01-9bea-a974d3f6d724" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:55:45.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-796" for this suite.

• [SLOW TEST:6.400 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3650,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:55:45.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Aug 11 12:55:45.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config cluster-info'
Aug 11 12:55:45.762: INFO: stderr: ""
Aug 11 12:55:45.762: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:55:45.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6028" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":226,"skipped":3658,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:55:45.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:55:45.885: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1dc5b5f-d79e-423d-80a9-bfc38cea1d00" in namespace "downward-api-8018" to be "Succeeded or Failed"
Aug 11 12:55:45.910: INFO: Pod "downwardapi-volume-e1dc5b5f-d79e-423d-80a9-bfc38cea1d00": Phase="Pending", Reason="", readiness=false. Elapsed: 25.09755ms
Aug 11 12:55:48.074: INFO: Pod "downwardapi-volume-e1dc5b5f-d79e-423d-80a9-bfc38cea1d00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189552121s
Aug 11 12:55:50.145: INFO: Pod "downwardapi-volume-e1dc5b5f-d79e-423d-80a9-bfc38cea1d00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.260743727s
STEP: Saw pod success
Aug 11 12:55:50.145: INFO: Pod "downwardapi-volume-e1dc5b5f-d79e-423d-80a9-bfc38cea1d00" satisfied condition "Succeeded or Failed"
Aug 11 12:55:50.149: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e1dc5b5f-d79e-423d-80a9-bfc38cea1d00 container client-container: 
STEP: delete the pod
Aug 11 12:55:50.183: INFO: Waiting for pod downwardapi-volume-e1dc5b5f-d79e-423d-80a9-bfc38cea1d00 to disappear
Aug 11 12:55:50.192: INFO: Pod downwardapi-volume-e1dc5b5f-d79e-423d-80a9-bfc38cea1d00 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:55:50.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8018" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3684,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:55:50.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Aug 11 12:55:50.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config api-versions'
Aug 11 12:55:50.560: INFO: stderr: ""
Aug 11 12:55:50.560: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:55:50.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6573" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":228,"skipped":3687,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:55:50.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:56:02.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9083" for this suite.

• [SLOW TEST:11.564 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":229,"skipped":3708,"failed":0}
SS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:56:02.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-65df3a0a-1f61-4600-9341-cde66f2ba380
STEP: Creating secret with name secret-projected-all-test-volume-d58315fb-4ae1-4148-ace9-2c6f8d175a1a
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 11 12:56:02.295: INFO: Waiting up to 5m0s for pod "projected-volume-1b297707-6afe-428a-8b06-27a214d68c9b" in namespace "projected-6736" to be "Succeeded or Failed"
Aug 11 12:56:02.299: INFO: Pod "projected-volume-1b297707-6afe-428a-8b06-27a214d68c9b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.937049ms
Aug 11 12:56:04.570: INFO: Pod "projected-volume-1b297707-6afe-428a-8b06-27a214d68c9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.275477214s
Aug 11 12:56:06.631: INFO: Pod "projected-volume-1b297707-6afe-428a-8b06-27a214d68c9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.336183746s
STEP: Saw pod success
Aug 11 12:56:06.631: INFO: Pod "projected-volume-1b297707-6afe-428a-8b06-27a214d68c9b" satisfied condition "Succeeded or Failed"
Aug 11 12:56:06.634: INFO: Trying to get logs from node kali-worker pod projected-volume-1b297707-6afe-428a-8b06-27a214d68c9b container projected-all-volume-test: 
STEP: delete the pod
Aug 11 12:56:06.672: INFO: Waiting for pod projected-volume-1b297707-6afe-428a-8b06-27a214d68c9b to disappear
Aug 11 12:56:06.688: INFO: Pod projected-volume-1b297707-6afe-428a-8b06-27a214d68c9b no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:56:06.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6736" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3710,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:56:06.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:56:08.458: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:56:10.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747368, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747368, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747368, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747368, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:56:12.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747368, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747368, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747368, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747368, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:56:15.713: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:56:15.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4943-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:56:16.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6125" for this suite.
STEP: Destroying namespace "webhook-6125-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.269 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":231,"skipped":3734,"failed":0}
SSSSSSSSSSS
------------------------------
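The `{"msg":"PASSED ...","total":...}` lines interleaved with the log (one per completed spec) are plain JSON and easy to extract with a short script. A minimal sketch, using the PASSED line from the block above verbatim:

```python
import json

# One machine-readable summary line, copied verbatim from the log above.
line = '{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":231,"skipped":3734,"failed":0}'

summary = json.loads(line)
print(f'{summary["completed"]} of {summary["total"]} specs completed, '
      f'{summary["failed"]} failed, {summary["skipped"]} skipped so far')
```

Filtering a full log for lines that start with `{"msg":` and JSON-decoding them recovers the suite's progress counters without scraping the surrounding free-form output.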
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:56:16.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:56:19.119: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:56:21.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747379, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747379, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747380, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747378, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:56:23.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747379, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747379, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747380, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747378, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:56:25.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747379, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747379, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747380, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747378, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:56:27.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747379, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747379, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747380, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747378, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:56:30.810: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 11 12:56:36.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config attach --namespace=webhook-5288 to-be-attached-pod -i -c=container1'
Aug 11 12:56:36.963: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:56:36.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5288" for this suite.
STEP: Destroying namespace "webhook-5288-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.120 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":232,"skipped":3745,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:56:37.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 12:56:38.122: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 12:56:40.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747398, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747398, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747398, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747397, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:56:42.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747398, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747398, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747398, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747397, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 12:56:44.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747398, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747398, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747398, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747397, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 12:56:47.169: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:56:47.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6097-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:56:49.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1949" for this suite.
STEP: Destroying namespace "webhook-1949-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.627 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":233,"skipped":3752,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:56:49.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-9380
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 11 12:56:50.266: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 11 12:56:50.984: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:56:53.169: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:56:55.182: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:56:57.050: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:56:58.988: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:57:00.987: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:57:03.190: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:57:05.038: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:57:06.987: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:57:09.081: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 11 12:57:09.200: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 11 12:57:11.203: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 11 12:57:15.244: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.181:8080/dial?request=hostname&protocol=udp&host=10.244.2.180&port=8081&tries=1'] Namespace:pod-network-test-9380 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:57:15.244: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:57:15.274713       7 log.go:172] (0xc0027ff600) (0xc000501180) Create stream
I0811 12:57:15.274741       7 log.go:172] (0xc0027ff600) (0xc000501180) Stream added, broadcasting: 1
I0811 12:57:15.276461       7 log.go:172] (0xc0027ff600) Reply frame received for 1
I0811 12:57:15.276517       7 log.go:172] (0xc0027ff600) (0xc000501540) Create stream
I0811 12:57:15.276532       7 log.go:172] (0xc0027ff600) (0xc000501540) Stream added, broadcasting: 3
I0811 12:57:15.277653       7 log.go:172] (0xc0027ff600) Reply frame received for 3
I0811 12:57:15.277688       7 log.go:172] (0xc0027ff600) (0xc001020000) Create stream
I0811 12:57:15.277704       7 log.go:172] (0xc0027ff600) (0xc001020000) Stream added, broadcasting: 5
I0811 12:57:15.278733       7 log.go:172] (0xc0027ff600) Reply frame received for 5
I0811 12:57:15.347875       7 log.go:172] (0xc0027ff600) Data frame received for 3
I0811 12:57:15.347897       7 log.go:172] (0xc000501540) (3) Data frame handling
I0811 12:57:15.347909       7 log.go:172] (0xc000501540) (3) Data frame sent
I0811 12:57:15.348370       7 log.go:172] (0xc0027ff600) Data frame received for 3
I0811 12:57:15.348385       7 log.go:172] (0xc000501540) (3) Data frame handling
I0811 12:57:15.348417       7 log.go:172] (0xc0027ff600) Data frame received for 5
I0811 12:57:15.348438       7 log.go:172] (0xc001020000) (5) Data frame handling
I0811 12:57:15.349721       7 log.go:172] (0xc0027ff600) Data frame received for 1
I0811 12:57:15.349734       7 log.go:172] (0xc000501180) (1) Data frame handling
I0811 12:57:15.349743       7 log.go:172] (0xc000501180) (1) Data frame sent
I0811 12:57:15.349755       7 log.go:172] (0xc0027ff600) (0xc000501180) Stream removed, broadcasting: 1
I0811 12:57:15.349770       7 log.go:172] (0xc0027ff600) Go away received
I0811 12:57:15.349886       7 log.go:172] (0xc0027ff600) (0xc000501180) Stream removed, broadcasting: 1
I0811 12:57:15.349906       7 log.go:172] (0xc0027ff600) (0xc000501540) Stream removed, broadcasting: 3
I0811 12:57:15.349913       7 log.go:172] (0xc0027ff600) (0xc001020000) Stream removed, broadcasting: 5
Aug 11 12:57:15.349: INFO: Waiting for responses: map[]
Aug 11 12:57:15.352: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.181:8080/dial?request=hostname&protocol=udp&host=10.244.1.161&port=8081&tries=1'] Namespace:pod-network-test-9380 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:57:15.352: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:57:15.374916       7 log.go:172] (0xc0027ffb80) (0xc002268320) Create stream
I0811 12:57:15.374936       7 log.go:172] (0xc0027ffb80) (0xc002268320) Stream added, broadcasting: 1
I0811 12:57:15.376192       7 log.go:172] (0xc0027ffb80) Reply frame received for 1
I0811 12:57:15.376249       7 log.go:172] (0xc0027ffb80) (0xc002268500) Create stream
I0811 12:57:15.376272       7 log.go:172] (0xc0027ffb80) (0xc002268500) Stream added, broadcasting: 3
I0811 12:57:15.377117       7 log.go:172] (0xc0027ffb80) Reply frame received for 3
I0811 12:57:15.377146       7 log.go:172] (0xc0027ffb80) (0xc0028c4960) Create stream
I0811 12:57:15.377158       7 log.go:172] (0xc0027ffb80) (0xc0028c4960) Stream added, broadcasting: 5
I0811 12:57:15.377798       7 log.go:172] (0xc0027ffb80) Reply frame received for 5
I0811 12:57:15.464669       7 log.go:172] (0xc0027ffb80) Data frame received for 3
I0811 12:57:15.464686       7 log.go:172] (0xc002268500) (3) Data frame handling
I0811 12:57:15.464697       7 log.go:172] (0xc002268500) (3) Data frame sent
I0811 12:57:15.465377       7 log.go:172] (0xc0027ffb80) Data frame received for 5
I0811 12:57:15.465400       7 log.go:172] (0xc0028c4960) (5) Data frame handling
I0811 12:57:15.465661       7 log.go:172] (0xc0027ffb80) Data frame received for 3
I0811 12:57:15.465670       7 log.go:172] (0xc002268500) (3) Data frame handling
I0811 12:57:15.466957       7 log.go:172] (0xc0027ffb80) Data frame received for 1
I0811 12:57:15.466977       7 log.go:172] (0xc002268320) (1) Data frame handling
I0811 12:57:15.466984       7 log.go:172] (0xc002268320) (1) Data frame sent
I0811 12:57:15.466992       7 log.go:172] (0xc0027ffb80) (0xc002268320) Stream removed, broadcasting: 1
I0811 12:57:15.467002       7 log.go:172] (0xc0027ffb80) Go away received
I0811 12:57:15.467145       7 log.go:172] (0xc0027ffb80) (0xc002268320) Stream removed, broadcasting: 1
I0811 12:57:15.467173       7 log.go:172] (0xc0027ffb80) (0xc002268500) Stream removed, broadcasting: 3
I0811 12:57:15.467184       7 log.go:172] (0xc0027ffb80) (0xc0028c4960) Stream removed, broadcasting: 5
Aug 11 12:57:15.467: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:57:15.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9380" for this suite.

• [SLOW TEST:25.758 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3765,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
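The intra-pod UDP check above works by exec'ing `curl` inside the test pod against a netserver's `/dial` endpoint, which in turn dials the target pod and reports which hostnames answered. A sketch of how that query URL is assembled, with values taken from the log; the helper name is illustrative, not the framework's own:

```python
from urllib.parse import urlencode

def dial_url(proxy_ip, target_ip, protocol="udp", port=8081, tries=1):
    # The test pod curls this URL on the netserver at proxy_ip:8080; the
    # netserver then dials target_ip over `protocol` and returns the
    # hostnames that responded, which the test compares against expectations.
    params = {"request": "hostname", "protocol": protocol,
              "host": target_ip, "port": port, "tries": tries}
    return f"http://{proxy_ip}:8080/dial?{urlencode(params)}"

print(dial_url("10.244.2.181", "10.244.2.180"))
```

The `Waiting for responses: map[]` lines afterwards indicate the set of still-missing responses is empty, i.e. every expected endpoint answered.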
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:57:15.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-4600fe79-add8-4103-844c-e1a9152e50fe
STEP: Creating a pod to test consume configMaps
Aug 11 12:57:15.621: INFO: Waiting up to 5m0s for pod "pod-configmaps-3d23c973-ffd0-404f-88be-46ff258d16f5" in namespace "configmap-92" to be "Succeeded or Failed"
Aug 11 12:57:15.648: INFO: Pod "pod-configmaps-3d23c973-ffd0-404f-88be-46ff258d16f5": Phase="Pending", Reason="", readiness=false. Elapsed: 27.287805ms
Aug 11 12:57:17.651: INFO: Pod "pod-configmaps-3d23c973-ffd0-404f-88be-46ff258d16f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030201504s
Aug 11 12:57:19.726: INFO: Pod "pod-configmaps-3d23c973-ffd0-404f-88be-46ff258d16f5": Phase="Running", Reason="", readiness=true. Elapsed: 4.104987536s
Aug 11 12:57:21.801: INFO: Pod "pod-configmaps-3d23c973-ffd0-404f-88be-46ff258d16f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.180307704s
STEP: Saw pod success
Aug 11 12:57:21.801: INFO: Pod "pod-configmaps-3d23c973-ffd0-404f-88be-46ff258d16f5" satisfied condition "Succeeded or Failed"
Aug 11 12:57:21.803: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-3d23c973-ffd0-404f-88be-46ff258d16f5 container configmap-volume-test: 
STEP: delete the pod
Aug 11 12:57:22.593: INFO: Waiting for pod pod-configmaps-3d23c973-ffd0-404f-88be-46ff258d16f5 to disappear
Aug 11 12:57:22.903: INFO: Pod pod-configmaps-3d23c973-ffd0-404f-88be-46ff258d16f5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:57:22.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-92" for this suite.

• [SLOW TEST:7.722 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":3802,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
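Several specs above poll a pod's phase until it satisfies "Succeeded or Failed" within a 5m0s budget, logging the phase and elapsed time on each attempt. A minimal sketch of that wait loop, with `get_phase` as a hypothetical stand-in for the real API lookup the framework performs:

```python
import time

def wait_for_pod(get_phase, timeout=300.0, interval=2.0):
    # Poll get_phase() until the pod reaches a terminal phase, mirroring the
    # framework's 'Waiting up to 5m0s ... to be "Succeeded or Failed"' output.
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}", elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Example: a pod that reports Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
wait_for_pod(lambda: next(phases), interval=0.01)
```

The real framework polls roughly every two seconds, which is why the logged `Elapsed` values above step up in ~2s increments until the terminal phase appears.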
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:57:23.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 11 12:57:24.089: INFO: Waiting up to 5m0s for pod "pod-743eefd6-cb31-4129-959a-1ab6d56974e2" in namespace "emptydir-6441" to be "Succeeded or Failed"
Aug 11 12:57:24.668: INFO: Pod "pod-743eefd6-cb31-4129-959a-1ab6d56974e2": Phase="Pending", Reason="", readiness=false. Elapsed: 578.850121ms
Aug 11 12:57:26.672: INFO: Pod "pod-743eefd6-cb31-4129-959a-1ab6d56974e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.582691109s
Aug 11 12:57:28.675: INFO: Pod "pod-743eefd6-cb31-4129-959a-1ab6d56974e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.58616947s
Aug 11 12:57:30.679: INFO: Pod "pod-743eefd6-cb31-4129-959a-1ab6d56974e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.590022745s
STEP: Saw pod success
Aug 11 12:57:30.679: INFO: Pod "pod-743eefd6-cb31-4129-959a-1ab6d56974e2" satisfied condition "Succeeded or Failed"
Aug 11 12:57:30.681: INFO: Trying to get logs from node kali-worker2 pod pod-743eefd6-cb31-4129-959a-1ab6d56974e2 container test-container: 
STEP: delete the pod
Aug 11 12:57:30.864: INFO: Waiting for pod pod-743eefd6-cb31-4129-959a-1ab6d56974e2 to disappear
Aug 11 12:57:30.882: INFO: Pod pod-743eefd6-cb31-4129-959a-1ab6d56974e2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:57:30.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6441" for this suite.

• [SLOW TEST:7.691 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":3836,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:57:30.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:57:43.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2092" for this suite.

• [SLOW TEST:12.482 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":3844,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:57:43.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 12:57:43.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug 11 12:57:44.082: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-11T12:57:44Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-11T12:57:44Z]] name:name1 resourceVersion:8572127 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:85dd461f-1794-405c-911b-880ef64bbd30] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 11 12:57:54.088: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-11T12:57:54Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-11T12:57:54Z]] name:name2 resourceVersion:8572167 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4c683716-90c0-4a18-b7ec-fe797d492a5b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 11 12:58:04.094: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-11T12:57:44Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-11T12:58:04Z]] name:name1 resourceVersion:8572197 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:85dd461f-1794-405c-911b-880ef64bbd30] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 11 12:58:14.099: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-11T12:57:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-11T12:58:14Z]] name:name2 resourceVersion:8572227 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4c683716-90c0-4a18-b7ec-fe797d492a5b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 11 12:58:24.515: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-11T12:57:44Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-11T12:58:04Z]] name:name1 resourceVersion:8572257 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:85dd461f-1794-405c-911b-880ef64bbd30] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 11 12:58:34.522: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-11T12:57:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-11T12:58:14Z]] name:name2 resourceVersion:8572282 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4c683716-90c0-4a18-b7ec-fe797d492a5b] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:58:45.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-1543" for this suite.

• [SLOW TEST:61.667 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":238,"skipped":3869,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:58:45.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 11 12:58:45.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6ae4663-14fa-4273-af11-8af044f91f37" in namespace "downward-api-6759" to be "Succeeded or Failed"
Aug 11 12:58:45.264: INFO: Pod "downwardapi-volume-c6ae4663-14fa-4273-af11-8af044f91f37": Phase="Pending", Reason="", readiness=false. Elapsed: 25.785483ms
Aug 11 12:58:47.268: INFO: Pod "downwardapi-volume-c6ae4663-14fa-4273-af11-8af044f91f37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030176421s
Aug 11 12:58:49.272: INFO: Pod "downwardapi-volume-c6ae4663-14fa-4273-af11-8af044f91f37": Phase="Running", Reason="", readiness=true. Elapsed: 4.033815879s
Aug 11 12:58:51.275: INFO: Pod "downwardapi-volume-c6ae4663-14fa-4273-af11-8af044f91f37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037358138s
STEP: Saw pod success
Aug 11 12:58:51.275: INFO: Pod "downwardapi-volume-c6ae4663-14fa-4273-af11-8af044f91f37" satisfied condition "Succeeded or Failed"
Aug 11 12:58:51.278: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-c6ae4663-14fa-4273-af11-8af044f91f37 container client-container: 
STEP: delete the pod
Aug 11 12:58:51.314: INFO: Waiting for pod downwardapi-volume-c6ae4663-14fa-4273-af11-8af044f91f37 to disappear
Aug 11 12:58:51.357: INFO: Pod downwardapi-volume-c6ae4663-14fa-4273-af11-8af044f91f37 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:58:51.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6759" for this suite.

• [SLOW TEST:6.327 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":3876,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:58:51.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
Aug 11 12:58:51.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-7730 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 11 12:58:51.553: INFO: stderr: ""
Aug 11 12:58:51.553: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Aug 11 12:58:51.553: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 11 12:58:51.553: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7730" to be "running and ready, or succeeded"
Aug 11 12:58:51.594: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 41.158864ms
Aug 11 12:58:53.600: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046327169s
Aug 11 12:58:55.710: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157083801s
Aug 11 12:58:57.713: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.159970959s
Aug 11 12:58:57.713: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 11 12:58:57.713: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug 11 12:58:57.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7730'
Aug 11 12:58:57.867: INFO: stderr: ""
Aug 11 12:58:57.867: INFO: stdout: "I0811 12:58:56.152381       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/7g58 418\nI0811 12:58:56.352472       1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/pdd8 311\nI0811 12:58:56.552541       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/qgn 351\nI0811 12:58:56.752560       1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/7l7 313\nI0811 12:58:56.952512       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/rrj 392\nI0811 12:58:57.152450       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/nljx 387\nI0811 12:58:57.352517       1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/n6f 471\nI0811 12:58:57.552474       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/w8wc 274\nI0811 12:58:57.752485       1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/4nrj 358\n"
STEP: limiting log lines
Aug 11 12:58:57.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7730 --tail=1'
Aug 11 12:58:57.967: INFO: stderr: ""
Aug 11 12:58:57.967: INFO: stdout: "I0811 12:58:57.952469       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/48w 235\n"
Aug 11 12:58:57.967: INFO: got output "I0811 12:58:57.952469       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/48w 235\n"
STEP: limiting log bytes
Aug 11 12:58:57.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7730 --limit-bytes=1'
Aug 11 12:58:58.062: INFO: stderr: ""
Aug 11 12:58:58.062: INFO: stdout: "I"
Aug 11 12:58:58.062: INFO: got output "I"
STEP: exposing timestamps
Aug 11 12:58:58.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7730 --tail=1 --timestamps'
Aug 11 12:58:58.154: INFO: stderr: ""
Aug 11 12:58:58.154: INFO: stdout: "2020-08-11T12:58:57.952603946Z I0811 12:58:57.952469       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/48w 235\n"
Aug 11 12:58:58.154: INFO: got output "2020-08-11T12:58:57.952603946Z I0811 12:58:57.952469       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/48w 235\n"
STEP: restricting to a time range
Aug 11 12:59:00.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7730 --since=1s'
Aug 11 12:59:00.752: INFO: stderr: ""
Aug 11 12:59:00.752: INFO: stdout: "I0811 12:58:59.752475       1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/87h 213\nI0811 12:58:59.952523       1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/nn9 599\nI0811 12:59:00.152538       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/bdg8 350\nI0811 12:59:00.352512       1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/x89w 235\nI0811 12:59:00.552492       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/kxt9 310\n"
Aug 11 12:59:00.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7730 --since=24h'
Aug 11 12:59:00.916: INFO: stderr: ""
Aug 11 12:59:00.916: INFO: stdout: "I0811 12:58:56.152381       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/7g58 418\nI0811 12:58:56.352472       1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/pdd8 311\nI0811 12:58:56.552541       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/qgn 351\nI0811 12:58:56.752560       1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/7l7 313\nI0811 12:58:56.952512       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/rrj 392\nI0811 12:58:57.152450       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/nljx 387\nI0811 12:58:57.352517       1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/n6f 471\nI0811 12:58:57.552474       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/w8wc 274\nI0811 12:58:57.752485       1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/4nrj 358\nI0811 12:58:57.952469       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/48w 235\nI0811 12:58:58.152504       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/2m6 475\nI0811 12:58:58.352548       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/5qt 452\nI0811 12:58:58.552509       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/kzq 596\nI0811 12:58:58.752522       1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/bhc 395\nI0811 12:58:58.952527       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/dq8 572\nI0811 12:58:59.152540       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/sxw 439\nI0811 12:58:59.352521       1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/njq 322\nI0811 12:58:59.552549       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/n97 230\nI0811 12:58:59.752475       1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/87h 213\nI0811 12:58:59.952523       1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/nn9 599\nI0811 12:59:00.152538       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/bdg8 350\nI0811 12:59:00.352512       1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/x89w 235\nI0811 12:59:00.552492       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/kxt9 310\nI0811 12:59:00.752510       1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/9h8 423\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Aug 11 12:59:00.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7730'
Aug 11 12:59:03.854: INFO: stderr: ""
Aug 11 12:59:03.854: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:59:03.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7730" for this suite.

• [SLOW TEST:12.512 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":240,"skipped":3900,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
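The Kubectl logs test above exercises four filtering flags (--tail, --limit-bytes, --timestamps, --since). As a hedged recap: the kubectl invocations below use the pod and namespace names from this run and need a live cluster, so they are left as comments; the selection behavior of --tail=1 and --limit-bytes=1 is then replayed locally against two lines captured from the run.

```shell
#!/bin/sh
# Cluster-dependent forms exercised by the test (shown as comments):
#   kubectl logs logs-generator -n kubectl-7730 --tail=1         # last line only
#   kubectl logs logs-generator -n kubectl-7730 --limit-bytes=1  # first byte only
#   kubectl logs logs-generator -n kubectl-7730 --timestamps     # RFC3339 prefix per line
#   kubectl logs logs-generator -n kubectl-7730 --since=1s       # only entries from the last second

# Local replay against two captured logs_generator lines:
log='I0811 12:58:57.752485 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/4nrj 358
I0811 12:58:57.952469 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/48w 235'

printf '%s\n' "$log" | tail -n 1    # analogous to --tail=1: selects the "9 GET ... 48w" line
printf '%s\n' "$log" | head -c 1    # analogous to --limit-bytes=1: prints "I"
```

This mirrors why the test's --limit-bytes=1 output was the single character "I": every klog line starts with the severity letter.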
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:59:03.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-9778
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 11 12:59:03.988: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 11 12:59:04.110: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:59:06.607: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:59:08.386: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 11 12:59:10.153: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:59:12.113: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:59:14.113: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:59:16.131: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:59:18.113: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 11 12:59:20.118: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 11 12:59:20.121: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 11 12:59:22.765: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 11 12:59:24.124: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 11 12:59:26.124: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 11 12:59:28.125: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 11 12:59:30.123: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 11 12:59:36.200: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.183 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9778 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:59:36.200: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:59:36.233011       7 log.go:172] (0xc0053104d0) (0xc001456be0) Create stream
I0811 12:59:36.233057       7 log.go:172] (0xc0053104d0) (0xc001456be0) Stream added, broadcasting: 1
I0811 12:59:36.234606       7 log.go:172] (0xc0053104d0) Reply frame received for 1
I0811 12:59:36.234641       7 log.go:172] (0xc0053104d0) (0xc001719a40) Create stream
I0811 12:59:36.234656       7 log.go:172] (0xc0053104d0) (0xc001719a40) Stream added, broadcasting: 3
I0811 12:59:36.235374       7 log.go:172] (0xc0053104d0) Reply frame received for 3
I0811 12:59:36.235393       7 log.go:172] (0xc0053104d0) (0xc0028c5f40) Create stream
I0811 12:59:36.235399       7 log.go:172] (0xc0053104d0) (0xc0028c5f40) Stream added, broadcasting: 5
I0811 12:59:36.236182       7 log.go:172] (0xc0053104d0) Reply frame received for 5
I0811 12:59:37.329640       7 log.go:172] (0xc0053104d0) Data frame received for 5
I0811 12:59:37.329695       7 log.go:172] (0xc0028c5f40) (5) Data frame handling
I0811 12:59:37.329749       7 log.go:172] (0xc0053104d0) Data frame received for 3
I0811 12:59:37.329776       7 log.go:172] (0xc001719a40) (3) Data frame handling
I0811 12:59:37.329790       7 log.go:172] (0xc001719a40) (3) Data frame sent
I0811 12:59:37.329801       7 log.go:172] (0xc0053104d0) Data frame received for 3
I0811 12:59:37.329813       7 log.go:172] (0xc001719a40) (3) Data frame handling
I0811 12:59:37.331578       7 log.go:172] (0xc0053104d0) Data frame received for 1
I0811 12:59:37.331591       7 log.go:172] (0xc001456be0) (1) Data frame handling
I0811 12:59:37.331604       7 log.go:172] (0xc001456be0) (1) Data frame sent
I0811 12:59:37.331650       7 log.go:172] (0xc0053104d0) (0xc001456be0) Stream removed, broadcasting: 1
I0811 12:59:37.331726       7 log.go:172] (0xc0053104d0) (0xc001456be0) Stream removed, broadcasting: 1
I0811 12:59:37.331736       7 log.go:172] (0xc0053104d0) (0xc001719a40) Stream removed, broadcasting: 3
I0811 12:59:37.331797       7 log.go:172] (0xc0053104d0) Go away received
I0811 12:59:37.331882       7 log.go:172] (0xc0053104d0) (0xc0028c5f40) Stream removed, broadcasting: 5
Aug 11 12:59:37.331: INFO: Found all expected endpoints: [netserver-0]
Aug 11 12:59:37.334: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.166 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9778 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 12:59:37.334: INFO: >>> kubeConfig: /root/.kube/config
I0811 12:59:37.357654       7 log.go:172] (0xc002d62d10) (0xc00166ec80) Create stream
I0811 12:59:37.357680       7 log.go:172] (0xc002d62d10) (0xc00166ec80) Stream added, broadcasting: 1
I0811 12:59:37.359199       7 log.go:172] (0xc002d62d10) Reply frame received for 1
I0811 12:59:37.359247       7 log.go:172] (0xc002d62d10) (0xc001456c80) Create stream
I0811 12:59:37.359257       7 log.go:172] (0xc002d62d10) (0xc001456c80) Stream added, broadcasting: 3
I0811 12:59:37.360132       7 log.go:172] (0xc002d62d10) Reply frame received for 3
I0811 12:59:37.360166       7 log.go:172] (0xc002d62d10) (0xc001dc00a0) Create stream
I0811 12:59:37.360187       7 log.go:172] (0xc002d62d10) (0xc001dc00a0) Stream added, broadcasting: 5
I0811 12:59:37.361487       7 log.go:172] (0xc002d62d10) Reply frame received for 5
I0811 12:59:38.418068       7 log.go:172] (0xc002d62d10) Data frame received for 3
I0811 12:59:38.418086       7 log.go:172] (0xc001456c80) (3) Data frame handling
I0811 12:59:38.418094       7 log.go:172] (0xc001456c80) (3) Data frame sent
I0811 12:59:38.418572       7 log.go:172] (0xc002d62d10) Data frame received for 5
I0811 12:59:38.418589       7 log.go:172] (0xc001dc00a0) (5) Data frame handling
I0811 12:59:38.418620       7 log.go:172] (0xc002d62d10) Data frame received for 3
I0811 12:59:38.418632       7 log.go:172] (0xc001456c80) (3) Data frame handling
I0811 12:59:38.419640       7 log.go:172] (0xc002d62d10) Data frame received for 1
I0811 12:59:38.419657       7 log.go:172] (0xc00166ec80) (1) Data frame handling
I0811 12:59:38.419666       7 log.go:172] (0xc00166ec80) (1) Data frame sent
I0811 12:59:38.419752       7 log.go:172] (0xc002d62d10) (0xc00166ec80) Stream removed, broadcasting: 1
I0811 12:59:38.419779       7 log.go:172] (0xc002d62d10) Go away received
I0811 12:59:38.419914       7 log.go:172] (0xc002d62d10) (0xc00166ec80) Stream removed, broadcasting: 1
I0811 12:59:38.419948       7 log.go:172] (0xc002d62d10) (0xc001456c80) Stream removed, broadcasting: 3
I0811 12:59:38.419965       7 log.go:172] (0xc002d62d10) (0xc001dc00a0) Stream removed, broadcasting: 5
Aug 11 12:59:38.419: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 12:59:38.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9778" for this suite.

• [SLOW TEST:34.549 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":3927,"failed":0}
SSSSSSSS
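The node-pod UDP check above works by exec'ing netcat inside a host-network test pod and keeping only non-blank replies. A sketch of that probe (the pod IP and port are the ones from this run, so the nc line needs the cluster and is shown as a comment), plus a cluster-free demonstration of the blank-line filter:

```shell
#!/bin/sh
# The probe the test runs inside host-test-container-pod (cluster-dependent):
#   echo hostName | nc -w 1 -u 10.244.2.183 8081 | grep -v '^\s*$'
# nc -u sends the payload over UDP, -w 1 gives the netserver one second to
# echo back, and the grep drops blank lines so only a real reply counts.

# The blank-line filter itself, demonstrated locally. The log's '\s' is a
# GNU grep extension; the POSIX [[:space:]] class is the portable spelling:
printf 'netserver-0\n\n   \n' | grep -v '^[[:space:]]*$'   # prints only netserver-0
```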
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 12:59:38.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-55df317e-6988-4baf-bce8-93a5df06fd14 in namespace container-probe-1968
Aug 11 12:59:42.554: INFO: Started pod liveness-55df317e-6988-4baf-bce8-93a5df06fd14 in namespace container-probe-1968
STEP: checking the pod's current state and verifying that restartCount is present
Aug 11 12:59:42.556: INFO: Initial restart count of pod liveness-55df317e-6988-4baf-bce8-93a5df06fd14 is 0
Aug 11 13:00:02.820: INFO: Restart count of pod container-probe-1968/liveness-55df317e-6988-4baf-bce8-93a5df06fd14 is now 1 (20.263987825s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:00:02.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1968" for this suite.

• [SLOW TEST:24.519 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":3935,"failed":0}
SSSSSSS
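The liveness-probe test above passes once the pod's restartCount moves from 0 to 1 after /healthz starts failing. A hedged sketch of checking that field by hand (the kubectl line uses the pod and namespace from this run and needs the cluster, so it is a comment), with a cluster-free extraction from a captured status fragment; the JSON field names match the PodStatus containerStatuses shape:

```shell
#!/bin/sh
# Cluster-dependent check (shown as a comment):
#   kubectl get pod liveness-55df317e-6988-4baf-bce8-93a5df06fd14 \
#     -n container-probe-1968 \
#     -o jsonpath='{.status.containerStatuses[0].restartCount}'

# Local illustration: pull restartCount out of a sample status with sed.
status='{"name":"liveness","ready":false,"restartCount":1}'
printf '%s' "$status" | sed -n 's/.*"restartCount":\([0-9]*\).*/\1/p'   # prints 1
```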
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:00:02.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 11 13:00:03.624: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 11 13:00:03.663: INFO: Waiting for terminating namespaces to be deleted...
Aug 11 13:00:03.665: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 11 13:00:03.678: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug 11 13:00:03.678: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 11 13:00:03.678: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 11 13:00:03.678: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 11 13:00:03.678: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug 11 13:00:03.678: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 11 13:00:03.678: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug 11 13:00:03.678: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 13:00:03.678: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 11 13:00:03.682: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 11 13:00:03.682: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 13:00:03.682: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 11 13:00:03.682: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 11 13:00:03.682: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 11 13:00:03.682: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 11 13:00:03.682: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug 11 13:00:03.682: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-645d7cfb-d815-4ae5-a5c5-53b36eba048c 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-645d7cfb-d815-4ae5-a5c5-53b36eba048c off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-645d7cfb-d815-4ae5-a5c5-53b36eba048c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:00:17.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9193" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:14.217 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":243,"skipped":3942,"failed":0}
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:00:17.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-2110
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2110 to expose endpoints map[]
Aug 11 13:00:17.435: INFO: Get endpoints failed (133.355576ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug 11 13:00:18.439: INFO: successfully validated that service multi-endpoint-test in namespace services-2110 exposes endpoints map[] (1.137758152s elapsed)
STEP: Creating pod pod1 in namespace services-2110
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2110 to expose endpoints map[pod1:[100]]
Aug 11 13:00:23.879: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.432689981s elapsed, will retry)
Aug 11 13:00:25.894: INFO: successfully validated that service multi-endpoint-test in namespace services-2110 exposes endpoints map[pod1:[100]] (7.447766831s elapsed)
STEP: Creating pod pod2 in namespace services-2110
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2110 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 11 13:00:30.298: INFO: successfully validated that service multi-endpoint-test in namespace services-2110 exposes endpoints map[pod1:[100] pod2:[101]] (4.40148198s elapsed)
STEP: Deleting pod pod1 in namespace services-2110
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2110 to expose endpoints map[pod2:[101]]
Aug 11 13:00:31.376: INFO: successfully validated that service multi-endpoint-test in namespace services-2110 exposes endpoints map[pod2:[101]] (1.074780529s elapsed)
STEP: Deleting pod pod2 in namespace services-2110
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2110 to expose endpoints map[]
Aug 11 13:00:32.495: INFO: successfully validated that service multi-endpoint-test in namespace services-2110 exposes endpoints map[] (1.114099708s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:00:32.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2110" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:15.553 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":244,"skipped":3942,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:00:32.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 11 13:00:33.253: INFO: Waiting up to 5m0s for pod "pod-5a955f15-8aa9-47fe-b8d8-58cc6edd46a3" in namespace "emptydir-1486" to be "Succeeded or Failed"
Aug 11 13:00:33.520: INFO: Pod "pod-5a955f15-8aa9-47fe-b8d8-58cc6edd46a3": Phase="Pending", Reason="", readiness=false. Elapsed: 267.318956ms
Aug 11 13:00:35.524: INFO: Pod "pod-5a955f15-8aa9-47fe-b8d8-58cc6edd46a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270561191s
Aug 11 13:00:37.998: INFO: Pod "pod-5a955f15-8aa9-47fe-b8d8-58cc6edd46a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.745373522s
Aug 11 13:00:40.022: INFO: Pod "pod-5a955f15-8aa9-47fe-b8d8-58cc6edd46a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.76856536s
STEP: Saw pod success
Aug 11 13:00:40.022: INFO: Pod "pod-5a955f15-8aa9-47fe-b8d8-58cc6edd46a3" satisfied condition "Succeeded or Failed"
Aug 11 13:00:40.025: INFO: Trying to get logs from node kali-worker2 pod pod-5a955f15-8aa9-47fe-b8d8-58cc6edd46a3 container test-container: 
STEP: delete the pod
Aug 11 13:00:40.299: INFO: Waiting for pod pod-5a955f15-8aa9-47fe-b8d8-58cc6edd46a3 to disappear
Aug 11 13:00:40.305: INFO: Pod pod-5a955f15-8aa9-47fe-b8d8-58cc6edd46a3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:00:40.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1486" for this suite.

• [SLOW TEST:7.650 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":3945,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:00:40.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 11 13:00:51.198: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 13:00:51.216: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 13:00:53.216: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 13:00:53.382: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 13:00:55.216: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 13:00:55.220: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 13:00:57.216: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 13:00:57.219: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:00:57.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9789" for this suite.

• [SLOW TEST:16.862 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":3987,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:00:57.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 13:00:57.958: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 13:01:00.561: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747658, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747658, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747658, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747657, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 13:01:03.160: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747658, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747658, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747658, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747657, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 13:01:04.765: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747658, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747658, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747658, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747657, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 13:01:06.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747658, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747658, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747658, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732747657, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 13:01:09.648: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 11 13:01:09.666: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:01:09.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1110" for this suite.
STEP: Destroying namespace "webhook-1110-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.807 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":247,"skipped":4041,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:01:11.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-stvh
STEP: Creating a pod to test atomic-volume-subpath
Aug 11 13:01:11.674: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-stvh" in namespace "subpath-1640" to be "Succeeded or Failed"
Aug 11 13:01:11.837: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Pending", Reason="", readiness=false. Elapsed: 162.717251ms
Aug 11 13:01:14.211: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.537400088s
Aug 11 13:01:16.220: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.546273149s
Aug 11 13:01:18.226: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.551562662s
Aug 11 13:01:20.229: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Running", Reason="", readiness=true. Elapsed: 8.554554782s
Aug 11 13:01:22.291: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Running", Reason="", readiness=true. Elapsed: 10.617267708s
Aug 11 13:01:24.294: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Running", Reason="", readiness=true. Elapsed: 12.620224904s
Aug 11 13:01:26.298: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Running", Reason="", readiness=true. Elapsed: 14.623589845s
Aug 11 13:01:28.302: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Running", Reason="", readiness=true. Elapsed: 16.627967768s
Aug 11 13:01:30.305: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Running", Reason="", readiness=true. Elapsed: 18.631297377s
Aug 11 13:01:32.309: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Running", Reason="", readiness=true. Elapsed: 20.635102958s
Aug 11 13:01:34.313: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Running", Reason="", readiness=true. Elapsed: 22.63881274s
Aug 11 13:01:36.333: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Running", Reason="", readiness=true. Elapsed: 24.659259498s
Aug 11 13:01:38.337: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Running", Reason="", readiness=true. Elapsed: 26.662826673s
Aug 11 13:01:40.341: INFO: Pod "pod-subpath-test-configmap-stvh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.66690546s
STEP: Saw pod success
Aug 11 13:01:40.341: INFO: Pod "pod-subpath-test-configmap-stvh" satisfied condition "Succeeded or Failed"
Aug 11 13:01:40.343: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-stvh container test-container-subpath-configmap-stvh: 
STEP: delete the pod
Aug 11 13:01:40.541: INFO: Waiting for pod pod-subpath-test-configmap-stvh to disappear
Aug 11 13:01:40.735: INFO: Pod pod-subpath-test-configmap-stvh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-stvh
Aug 11 13:01:40.735: INFO: Deleting pod "pod-subpath-test-configmap-stvh" in namespace "subpath-1640"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:01:40.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1640" for this suite.

• [SLOW TEST:29.705 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":248,"skipped":4107,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:01:40.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-384
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-384
STEP: creating replication controller externalsvc in namespace services-384
I0811 13:01:41.117428       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-384, replica count: 2
I0811 13:01:44.167729       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 13:01:47.167925       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug 11 13:01:47.814: INFO: Creating new exec pod
Aug 11 13:01:51.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-384 execpod6p4qx -- /bin/sh -x -c nslookup clusterip-service'
Aug 11 13:01:57.792: INFO: stderr: "I0811 13:01:57.622598    2794 log.go:172] (0xc000d12b00) (0xc000f98280) Create stream\nI0811 13:01:57.622678    2794 log.go:172] (0xc000d12b00) (0xc000f98280) Stream added, broadcasting: 1\nI0811 13:01:57.625799    2794 log.go:172] (0xc000d12b00) Reply frame received for 1\nI0811 13:01:57.625844    2794 log.go:172] (0xc000d12b00) (0xc000f983c0) Create stream\nI0811 13:01:57.625856    2794 log.go:172] (0xc000d12b00) (0xc000f983c0) Stream added, broadcasting: 3\nI0811 13:01:57.626779    2794 log.go:172] (0xc000d12b00) Reply frame received for 3\nI0811 13:01:57.626813    2794 log.go:172] (0xc000d12b00) (0xc000f98460) Create stream\nI0811 13:01:57.626828    2794 log.go:172] (0xc000d12b00) (0xc000f98460) Stream added, broadcasting: 5\nI0811 13:01:57.627840    2794 log.go:172] (0xc000d12b00) Reply frame received for 5\nI0811 13:01:57.687430    2794 log.go:172] (0xc000d12b00) Data frame received for 5\nI0811 13:01:57.687456    2794 log.go:172] (0xc000f98460) (5) Data frame handling\nI0811 13:01:57.687471    2794 log.go:172] (0xc000f98460) (5) Data frame sent\n+ nslookup clusterip-service\nI0811 13:01:57.785031    2794 log.go:172] (0xc000d12b00) Data frame received for 3\nI0811 13:01:57.785048    2794 log.go:172] (0xc000f983c0) (3) Data frame handling\nI0811 13:01:57.785057    2794 log.go:172] (0xc000f983c0) (3) Data frame sent\nI0811 13:01:57.785949    2794 log.go:172] (0xc000d12b00) Data frame received for 3\nI0811 13:01:57.785964    2794 log.go:172] (0xc000f983c0) (3) Data frame handling\nI0811 13:01:57.785974    2794 log.go:172] (0xc000f983c0) (3) Data frame sent\nI0811 13:01:57.786383    2794 log.go:172] (0xc000d12b00) Data frame received for 3\nI0811 13:01:57.786400    2794 log.go:172] (0xc000f983c0) (3) Data frame handling\nI0811 13:01:57.786467    2794 log.go:172] (0xc000d12b00) Data frame received for 5\nI0811 13:01:57.786478    2794 log.go:172] (0xc000f98460) (5) Data frame handling\nI0811 13:01:57.787681    2794 log.go:172] (0xc000d12b00) Data frame received for 1\nI0811 13:01:57.787700    2794 log.go:172] (0xc000f98280) (1) Data frame handling\nI0811 13:01:57.787712    2794 log.go:172] (0xc000f98280) (1) Data frame sent\nI0811 13:01:57.787734    2794 log.go:172] (0xc000d12b00) (0xc000f98280) Stream removed, broadcasting: 1\nI0811 13:01:57.787747    2794 log.go:172] (0xc000d12b00) Go away received\nI0811 13:01:57.787990    2794 log.go:172] (0xc000d12b00) (0xc000f98280) Stream removed, broadcasting: 1\nI0811 13:01:57.788000    2794 log.go:172] (0xc000d12b00) (0xc000f983c0) Stream removed, broadcasting: 3\nI0811 13:01:57.788005    2794 log.go:172] (0xc000d12b00) (0xc000f98460) Stream removed, broadcasting: 5\n"
Aug 11 13:01:57.792: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-384.svc.cluster.local\tcanonical name = externalsvc.services-384.svc.cluster.local.\nName:\texternalsvc.services-384.svc.cluster.local\nAddress: 10.108.19.149\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-384, will wait for the garbage collector to delete the pods
Aug 11 13:01:57.850: INFO: Deleting ReplicationController externalsvc took: 4.859243ms
Aug 11 13:01:58.250: INFO: Terminating ReplicationController externalsvc pods took: 400.197106ms
Aug 11 13:02:03.977: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:02:03.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-384" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:23.258 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":249,"skipped":4112,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:02:04.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Aug 11 13:02:04.096: INFO: Waiting up to 5m0s for pod "client-containers-3094821c-9608-48ee-a9f2-25442f0795f7" in namespace "containers-7273" to be "Succeeded or Failed"
Aug 11 13:02:04.117: INFO: Pod "client-containers-3094821c-9608-48ee-a9f2-25442f0795f7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.165818ms
Aug 11 13:02:06.120: INFO: Pod "client-containers-3094821c-9608-48ee-a9f2-25442f0795f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024525891s
Aug 11 13:02:08.123: INFO: Pod "client-containers-3094821c-9608-48ee-a9f2-25442f0795f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027556142s
Aug 11 13:02:10.207: INFO: Pod "client-containers-3094821c-9608-48ee-a9f2-25442f0795f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111204359s
STEP: Saw pod success
Aug 11 13:02:10.207: INFO: Pod "client-containers-3094821c-9608-48ee-a9f2-25442f0795f7" satisfied condition "Succeeded or Failed"
Aug 11 13:02:10.358: INFO: Trying to get logs from node kali-worker pod client-containers-3094821c-9608-48ee-a9f2-25442f0795f7 container test-container: 
STEP: delete the pod
Aug 11 13:02:10.660: INFO: Waiting for pod client-containers-3094821c-9608-48ee-a9f2-25442f0795f7 to disappear
Aug 11 13:02:10.691: INFO: Pod client-containers-3094821c-9608-48ee-a9f2-25442f0795f7 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:02:10.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7273" for this suite.

• [SLOW TEST:6.698 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4123,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:02:10.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Aug 11 13:02:11.238: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 11 13:02:11.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1107'
Aug 11 13:02:12.027: INFO: stderr: ""
Aug 11 13:02:12.027: INFO: stdout: "service/agnhost-slave created\n"
Aug 11 13:02:12.027: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 11 13:02:12.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1107'
Aug 11 13:02:12.483: INFO: stderr: ""
Aug 11 13:02:12.483: INFO: stdout: "service/agnhost-master created\n"
Aug 11 13:02:12.484: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 11 13:02:12.484: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1107'
Aug 11 13:02:12.767: INFO: stderr: ""
Aug 11 13:02:12.767: INFO: stdout: "service/frontend created\n"
Aug 11 13:02:12.767: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 11 13:02:12.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1107'
Aug 11 13:02:13.005: INFO: stderr: ""
Aug 11 13:02:13.005: INFO: stdout: "deployment.apps/frontend created\n"
Aug 11 13:02:13.005: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 11 13:02:13.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1107'
Aug 11 13:02:13.321: INFO: stderr: ""
Aug 11 13:02:13.321: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 11 13:02:13.321: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 11 13:02:13.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1107'
Aug 11 13:02:13.610: INFO: stderr: ""
Aug 11 13:02:13.611: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 11 13:02:13.611: INFO: Waiting for all frontend pods to be Running.
Aug 11 13:02:23.661: INFO: Waiting for frontend to serve content.
Aug 11 13:02:24.691: INFO: Trying to add a new entry to the guestbook.
Aug 11 13:02:24.721: INFO: Verifying that added entry can be retrieved.
Aug 11 13:02:24.728: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Aug 11 13:02:29.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1107'
Aug 11 13:02:29.951: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 13:02:29.951: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 11 13:02:29.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1107'
Aug 11 13:02:30.118: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 13:02:30.118: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 11 13:02:30.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1107'
Aug 11 13:02:30.271: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 13:02:30.271: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 11 13:02:30.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1107'
Aug 11 13:02:30.379: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 13:02:30.379: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 11 13:02:30.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1107'
Aug 11 13:02:31.120: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 13:02:31.120: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 11 13:02:31.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1107'
Aug 11 13:02:31.344: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 13:02:31.344: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:02:31.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1107" for this suite.

• [SLOW TEST:21.040 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":251,"skipped":4126,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:02:31.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Aug 11 13:02:33.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3750'
Aug 11 13:02:34.273: INFO: stderr: ""
Aug 11 13:02:34.273: INFO: stdout: "pod/pause created\n"
Aug 11 13:02:34.273: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 11 13:02:34.273: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3750" to be "running and ready"
Aug 11 13:02:34.569: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 296.066532ms
Aug 11 13:02:36.798: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.524876474s
Aug 11 13:02:38.819: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.546078911s
Aug 11 13:02:40.838: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.565138249s
Aug 11 13:02:43.005: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.731920127s
Aug 11 13:02:43.005: INFO: Pod "pause" satisfied condition "running and ready"
Aug 11 13:02:43.005: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 11 13:02:43.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3750'
Aug 11 13:02:43.108: INFO: stderr: ""
Aug 11 13:02:43.108: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 11 13:02:43.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3750'
Aug 11 13:02:43.242: INFO: stderr: ""
Aug 11 13:02:43.242: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 11 13:02:43.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3750'
Aug 11 13:02:43.467: INFO: stderr: ""
Aug 11 13:02:43.467: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 11 13:02:43.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3750'
Aug 11 13:02:43.707: INFO: stderr: ""
Aug 11 13:02:43.707: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Aug 11 13:02:43.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3750'
Aug 11 13:02:43.898: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 13:02:43.898: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 11 13:02:43.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3750'
Aug 11 13:02:43.991: INFO: stderr: "No resources found in kubectl-3750 namespace.\n"
Aug 11 13:02:43.991: INFO: stdout: ""
Aug 11 13:02:43.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3750 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 11 13:02:44.077: INFO: stderr: ""
Aug 11 13:02:44.077: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:02:44.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3750" for this suite.

• [SLOW TEST:12.343 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":252,"skipped":4138,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:02:44.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:02:57.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1521" for this suite.

• [SLOW TEST:13.475 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":253,"skipped":4174,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:02:57.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-96b1ca32-f070-4c19-95d4-8f0f5a4263bb
STEP: Creating secret with name s-test-opt-upd-120e56b8-9e2e-4871-8237-8dcfb0db4edf
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-96b1ca32-f070-4c19-95d4-8f0f5a4263bb
STEP: Updating secret s-test-opt-upd-120e56b8-9e2e-4871-8237-8dcfb0db4edf
STEP: Creating secret with name s-test-opt-create-3094ecf7-c14f-4527-8dc0-386fac1c93a4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:04:34.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5854" for this suite.

• [SLOW TEST:96.568 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4175,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:04:34.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:04:45.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-327" for this suite.

• [SLOW TEST:11.298 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":255,"skipped":4248,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:04:45.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Aug 11 13:04:45.483: INFO: Waiting up to 5m0s for pod "client-containers-93642a74-2a25-46e5-a2cc-62e43e5c3c55" in namespace "containers-6179" to be "Succeeded or Failed"
Aug 11 13:04:45.487: INFO: Pod "client-containers-93642a74-2a25-46e5-a2cc-62e43e5c3c55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069158ms
Aug 11 13:04:47.557: INFO: Pod "client-containers-93642a74-2a25-46e5-a2cc-62e43e5c3c55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074130493s
Aug 11 13:04:49.560: INFO: Pod "client-containers-93642a74-2a25-46e5-a2cc-62e43e5c3c55": Phase="Running", Reason="", readiness=true. Elapsed: 4.07753078s
Aug 11 13:04:51.564: INFO: Pod "client-containers-93642a74-2a25-46e5-a2cc-62e43e5c3c55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.081167593s
STEP: Saw pod success
Aug 11 13:04:51.564: INFO: Pod "client-containers-93642a74-2a25-46e5-a2cc-62e43e5c3c55" satisfied condition "Succeeded or Failed"
Aug 11 13:04:51.566: INFO: Trying to get logs from node kali-worker2 pod client-containers-93642a74-2a25-46e5-a2cc-62e43e5c3c55 container test-container: 
STEP: delete the pod
Aug 11 13:04:51.627: INFO: Waiting for pod client-containers-93642a74-2a25-46e5-a2cc-62e43e5c3c55 to disappear
Aug 11 13:04:51.631: INFO: Pod client-containers-93642a74-2a25-46e5-a2cc-62e43e5c3c55 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:04:51.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6179" for this suite.

• [SLOW TEST:6.212 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4268,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:04:51.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 11 13:04:51.795: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 11 13:04:51.815: INFO: Waiting for terminating namespaces to be deleted...
Aug 11 13:04:51.818: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
Aug 11 13:04:51.825: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug 11 13:04:51.825: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 11 13:04:51.825: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug 11 13:04:51.825: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 13:04:51.825: INFO: pod-secrets-6806c878-a2ef-4453-87f0-431c40328ae3 from secrets-5854 started at 2020-08-11 13:02:59 +0000 UTC (3 container statuses recorded)
Aug 11 13:04:51.825: INFO: 	Container creates-volume-test ready: false, restart count 0
Aug 11 13:04:51.825: INFO: 	Container dels-volume-test ready: false, restart count 0
Aug 11 13:04:51.825: INFO: 	Container upds-volume-test ready: false, restart count 0
Aug 11 13:04:51.825: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug 11 13:04:51.825: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 11 13:04:51.825: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 11 13:04:51.825: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 11 13:04:51.825: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
Aug 11 13:04:51.831: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 11 13:04:51.831: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 13:04:51.831: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 11 13:04:51.831: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 11 13:04:51.831: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 11 13:04:51.831: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 11 13:04:51.831: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug 11 13:04:51.831: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-3f811532-591e-469e-972e-eae00887836d 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-3f811532-591e-469e-972e-eae00887836d off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-3f811532-591e-469e-972e-eae00887836d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:05:16.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5084" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:25.670 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":257,"skipped":4275,"failed":0}
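As context for the passing spec above: the scheduler treats two hostPorts as conflicting only when port, protocol, and host IP all overlap (with 0.0.0.0 overlapping every IP), which is why pod2 (different hostIP) and pod3 (different protocol) both schedule alongside pod1. A toy sketch of that rule — not the scheduler's actual code — using the values from the run:

```python
def host_ports_conflict(a, b):
    # Two hostPort requests collide only if the port matches, the protocol
    # matches, and the host IPs overlap (0.0.0.0 overlaps everything).
    if a["hostPort"] != b["hostPort"]:
        return False
    if a.get("protocol", "TCP") != b.get("protocol", "TCP"):
        return False
    ip_a = a.get("hostIP", "0.0.0.0")
    ip_b = b.get("hostIP", "0.0.0.0")
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

pod1 = {"hostPort": 54321, "hostIP": "127.0.0.1", "protocol": "TCP"}
pod2 = {"hostPort": 54321, "hostIP": "127.0.0.2", "protocol": "TCP"}
pod3 = {"hostPort": 54321, "hostIP": "127.0.0.2", "protocol": "UDP"}

assert not host_ports_conflict(pod1, pod2)  # same port, different hostIP
assert not host_ports_conflict(pod2, pod3)  # same port/IP, different protocol
assert host_ports_conflict(pod1, {"hostPort": 54321})  # unset hostIP = 0.0.0.0
```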
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:05:17.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 11 13:05:28.976: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 13:05:28.993: INFO: Pod pod-with-poststart-http-hook still exists
Aug 11 13:05:30.994: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 13:05:30.997: INFO: Pod pod-with-poststart-http-hook still exists
Aug 11 13:05:32.993: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 13:05:32.996: INFO: Pod pod-with-poststart-http-hook still exists
Aug 11 13:05:34.994: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 13:05:34.998: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:05:34.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5366" for this suite.

• [SLOW TEST:17.697 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4276,"failed":0}
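The spec above submits a pod whose container declares a postStart httpGet hook aimed at the handler pod created in BeforeEach. A minimal sketch of that shape of manifest, expressed as a Python dict — the image, host IP, and path below are illustrative, not taken from the run:

```python
# Illustrative postStart httpGet hook pod; image/host/path are hypothetical.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-http-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-poststart-http-hook",
            "image": "k8s.gcr.io/pause:3.2",  # illustrative image
            "lifecycle": {
                "postStart": {
                    "httpGet": {
                        "path": "/echo?msg=poststart",  # hypothetical path
                        "host": "10.0.0.1",  # handler pod IP (hypothetical)
                        "port": 8080,
                    }
                }
            },
        }]
    },
}

hook = pod["spec"]["containers"][0]["lifecycle"]["postStart"]["httpGet"]
assert hook["port"] == 8080
```

The test then waits for the handler to log the GET (the "check poststart hook" step) before deleting the pod.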
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:05:35.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-7774705a-19da-42b9-b24a-cc63fc1b6ad2
STEP: Creating a pod to test consume secrets
Aug 11 13:05:35.152: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2a3a64d2-7377-409c-ad94-238d91523f45" in namespace "projected-4619" to be "Succeeded or Failed"
Aug 11 13:05:35.174: INFO: Pod "pod-projected-secrets-2a3a64d2-7377-409c-ad94-238d91523f45": Phase="Pending", Reason="", readiness=false. Elapsed: 22.151417ms
Aug 11 13:05:37.176: INFO: Pod "pod-projected-secrets-2a3a64d2-7377-409c-ad94-238d91523f45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024573758s
Aug 11 13:05:39.180: INFO: Pod "pod-projected-secrets-2a3a64d2-7377-409c-ad94-238d91523f45": Phase="Running", Reason="", readiness=true. Elapsed: 4.028526431s
Aug 11 13:05:41.187: INFO: Pod "pod-projected-secrets-2a3a64d2-7377-409c-ad94-238d91523f45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035360258s
STEP: Saw pod success
Aug 11 13:05:41.187: INFO: Pod "pod-projected-secrets-2a3a64d2-7377-409c-ad94-238d91523f45" satisfied condition "Succeeded or Failed"
Aug 11 13:05:41.189: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-2a3a64d2-7377-409c-ad94-238d91523f45 container projected-secret-volume-test: 
STEP: delete the pod
Aug 11 13:05:41.270: INFO: Waiting for pod pod-projected-secrets-2a3a64d2-7377-409c-ad94-238d91523f45 to disappear
Aug 11 13:05:41.312: INFO: Pod pod-projected-secrets-2a3a64d2-7377-409c-ad94-238d91523f45 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:05:41.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4619" for this suite.

• [SLOW TEST:6.310 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4288,"failed":0}
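The defaultMode variant above mounts a projected secret and checks the mode of the resulting files. A sketch of the volume shape, with all names and the chosen mode illustrative:

```python
import stat

# Illustrative projected-secret volume with defaultMode set; the mounted
# file's mode string is what a mounttest-style container would report.
volume = {
    "name": "projected-secret-volume",
    "projected": {
        "defaultMode": 0o400,  # mode value here is illustrative
        "sources": [{"secret": {"name": "projected-secret-test-example"}}],
    },
}

mode = volume["projected"]["defaultMode"]
assert stat.filemode(stat.S_IFREG | mode) == "-r--------"
```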
SSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:05:41.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 13:05:41.492: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-60ef9cc5-0491-4390-bb1e-317bf9491cf0" in namespace "security-context-test-6844" to be "Succeeded or Failed"
Aug 11 13:05:41.683: INFO: Pod "busybox-readonly-false-60ef9cc5-0491-4390-bb1e-317bf9491cf0": Phase="Pending", Reason="", readiness=false. Elapsed: 191.571886ms
Aug 11 13:05:43.686: INFO: Pod "busybox-readonly-false-60ef9cc5-0491-4390-bb1e-317bf9491cf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194217576s
Aug 11 13:05:45.688: INFO: Pod "busybox-readonly-false-60ef9cc5-0491-4390-bb1e-317bf9491cf0": Phase="Running", Reason="", readiness=true. Elapsed: 4.196684763s
Aug 11 13:05:47.691: INFO: Pod "busybox-readonly-false-60ef9cc5-0491-4390-bb1e-317bf9491cf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.199380036s
Aug 11 13:05:47.691: INFO: Pod "busybox-readonly-false-60ef9cc5-0491-4390-bb1e-317bf9491cf0" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:05:47.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6844" for this suite.

• [SLOW TEST:6.406 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4294,"failed":0}
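The securityContext spec above verifies that with readOnlyRootFilesystem unset-or-false the container can write to its root filesystem. A sketch of the container shape (image and command are illustrative):

```python
# Illustrative container spec: readOnlyRootFilesystem=False means a write
# to the root filesystem should succeed, so the pod runs to "Succeeded".
container = {
    "name": "busybox-readonly-false",
    "image": "busybox",  # illustrative image
    "command": ["sh", "-c", "echo writable > /tmp/probe"],  # hypothetical
    "securityContext": {"readOnlyRootFilesystem": False},
}

assert container["securityContext"]["readOnlyRootFilesystem"] is False
```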
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:05:47.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0811 13:05:48.851508       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 13:05:48.851: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:05:48.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4912" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":261,"skipped":4308,"failed":0}
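The garbage-collector spec above deletes a Deployment without orphaning and waits for the dependent ReplicaSet (and its pods) to be collected — the "expected 0 rs, got 1 rs" lines are intermediate polls before collection finishes. A toy one-pass model of the rule (collection cascades over multiple passes in the real controller; names and UIDs below are illustrative):

```python
def collect(objects, deleted_owner_uid, orphan=False):
    """Names of dependents removed in one GC pass after their owner is
    deleted. With orphan=True the dependents are kept (in the real GC
    their ownerReference is stripped instead)."""
    if orphan:
        return set()
    return {o["name"] for o in objects
            if any(r["uid"] == deleted_owner_uid
                   for r in o.get("ownerReferences", []))}

rs = {"name": "deploy-7d4b5c", "ownerReferences": [{"uid": "deploy-uid"}]}
pod = {"name": "deploy-7d4b5c-x2k9", "ownerReferences": [{"uid": "rs-uid"}]}

assert collect([rs, pod], "deploy-uid") == {"deploy-7d4b5c"}
assert collect([rs, pod], "deploy-uid", orphan=True) == set()
```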
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:05:48.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Aug 11 13:05:53.089: INFO: Pod pod-hostip-c5e31476-a2ea-4f44-b1c1-62ba5561ce8f has hostIP: 172.18.0.15
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:05:53.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5145" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4335,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:05:53.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 11 13:05:53.279: INFO: Waiting up to 5m0s for pod "pod-076ec8ac-f271-432f-9097-a7457f7719ea" in namespace "emptydir-938" to be "Succeeded or Failed"
Aug 11 13:05:53.292: INFO: Pod "pod-076ec8ac-f271-432f-9097-a7457f7719ea": Phase="Pending", Reason="", readiness=false. Elapsed: 12.704049ms
Aug 11 13:05:55.295: INFO: Pod "pod-076ec8ac-f271-432f-9097-a7457f7719ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01571102s
Aug 11 13:05:57.328: INFO: Pod "pod-076ec8ac-f271-432f-9097-a7457f7719ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049065769s
Aug 11 13:05:59.430: INFO: Pod "pod-076ec8ac-f271-432f-9097-a7457f7719ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.151149165s
STEP: Saw pod success
Aug 11 13:05:59.430: INFO: Pod "pod-076ec8ac-f271-432f-9097-a7457f7719ea" satisfied condition "Succeeded or Failed"
Aug 11 13:05:59.433: INFO: Trying to get logs from node kali-worker2 pod pod-076ec8ac-f271-432f-9097-a7457f7719ea container test-container: 
STEP: delete the pod
Aug 11 13:05:59.889: INFO: Waiting for pod pod-076ec8ac-f271-432f-9097-a7457f7719ea to disappear
Aug 11 13:06:00.061: INFO: Pod pod-076ec8ac-f271-432f-9097-a7457f7719ea no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:06:00.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-938" for this suite.

• [SLOW TEST:7.310 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4340,"failed":0}
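The emptyDir spec above writes a file at mode 0777 on a tmpfs-backed volume as a non-root user and checks the reported mode. A sketch of the volume and the expected mode string (the UID is illustrative):

```python
import stat

# Illustrative tmpfs-backed emptyDir; "medium: Memory" selects tmpfs.
volume = {"name": "test-volume", "emptyDir": {"medium": "Memory"}}
security = {"runAsUser": 1001}  # non-root; UID is illustrative

# Mode string a mounttest-style container would report for a 0777 file.
assert stat.filemode(stat.S_IFREG | 0o777) == "-rwxrwxrwx"
```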
SSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:06:00.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Aug 11 13:06:00.989: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3943" to be "Succeeded or Failed"
Aug 11 13:06:01.078: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 89.060954ms
Aug 11 13:06:03.081: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091701339s
Aug 11 13:06:05.145: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155550477s
Aug 11 13:06:07.451: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.461531543s
STEP: Saw pod success
Aug 11 13:06:07.451: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug 11 13:06:07.454: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 11 13:06:07.758: INFO: Waiting for pod pod-host-path-test to disappear
Aug 11 13:06:07.802: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:06:07.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3943" for this suite.

• [SLOW TEST:7.483 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4350,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:06:07.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Aug 11 13:06:08.327: INFO: Waiting up to 5m0s for pod "var-expansion-4b597b4c-0c23-4157-9038-58f8c2ca9d02" in namespace "var-expansion-9690" to be "Succeeded or Failed"
Aug 11 13:06:08.359: INFO: Pod "var-expansion-4b597b4c-0c23-4157-9038-58f8c2ca9d02": Phase="Pending", Reason="", readiness=false. Elapsed: 31.910615ms
Aug 11 13:06:10.363: INFO: Pod "var-expansion-4b597b4c-0c23-4157-9038-58f8c2ca9d02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036157694s
Aug 11 13:06:12.648: INFO: Pod "var-expansion-4b597b4c-0c23-4157-9038-58f8c2ca9d02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321300376s
Aug 11 13:06:14.651: INFO: Pod "var-expansion-4b597b4c-0c23-4157-9038-58f8c2ca9d02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.324291372s
Aug 11 13:06:16.655: INFO: Pod "var-expansion-4b597b4c-0c23-4157-9038-58f8c2ca9d02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.328340666s
STEP: Saw pod success
Aug 11 13:06:16.655: INFO: Pod "var-expansion-4b597b4c-0c23-4157-9038-58f8c2ca9d02" satisfied condition "Succeeded or Failed"
Aug 11 13:06:16.658: INFO: Trying to get logs from node kali-worker2 pod var-expansion-4b597b4c-0c23-4157-9038-58f8c2ca9d02 container dapi-container: 
STEP: delete the pod
Aug 11 13:06:16.712: INFO: Waiting for pod var-expansion-4b597b4c-0c23-4157-9038-58f8c2ca9d02 to disappear
Aug 11 13:06:16.730: INFO: Pod var-expansion-4b597b4c-0c23-4157-9038-58f8c2ca9d02 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:06:16.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9690" for this suite.

• [SLOW TEST:8.850 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4365,"failed":0}
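The variable-expansion spec above defines env vars that reference earlier vars via `$(VAR)` syntax. A simplified sketch of that expansion (the real Kubernetes expansion also handles `$$` escaping, which this toy omits; the variable names are illustrative):

```python
import re

def expand(value, env):
    # $(VAR) expands if VAR was defined earlier in the env list;
    # unknown references are left verbatim.
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
                  lambda m: env.get(m.group(1), m.group(0)), value)

env = {}
for name, raw in [("FOO", "foo-value"),
                  ("BAR", "bar-value"),
                  ("FOOBAR", "$(FOO);;$(BAR)")]:
    env[name] = expand(raw, env)

assert env["FOOBAR"] == "foo-value;;bar-value"
assert expand("$(MISSING)", {}) == "$(MISSING)"  # unknown ref kept verbatim
```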
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:06:16.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 11 13:06:16.894: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:06:33.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5905" for this suite.

• [SLOW TEST:16.745 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4378,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:06:33.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-21312846-99b7-4c55-9119-54b2ee23a71d
STEP: Creating a pod to test consume secrets
Aug 11 13:06:33.642: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f5b1e5b3-b759-469e-837f-2285fa86cdae" in namespace "projected-234" to be "Succeeded or Failed"
Aug 11 13:06:33.684: INFO: Pod "pod-projected-secrets-f5b1e5b3-b759-469e-837f-2285fa86cdae": Phase="Pending", Reason="", readiness=false. Elapsed: 41.760834ms
Aug 11 13:06:35.783: INFO: Pod "pod-projected-secrets-f5b1e5b3-b759-469e-837f-2285fa86cdae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140585318s
Aug 11 13:06:37.821: INFO: Pod "pod-projected-secrets-f5b1e5b3-b759-469e-837f-2285fa86cdae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178728337s
Aug 11 13:06:39.850: INFO: Pod "pod-projected-secrets-f5b1e5b3-b759-469e-837f-2285fa86cdae": Phase="Running", Reason="", readiness=true. Elapsed: 6.207941924s
Aug 11 13:06:41.855: INFO: Pod "pod-projected-secrets-f5b1e5b3-b759-469e-837f-2285fa86cdae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.212633823s
STEP: Saw pod success
Aug 11 13:06:41.855: INFO: Pod "pod-projected-secrets-f5b1e5b3-b759-469e-837f-2285fa86cdae" satisfied condition "Succeeded or Failed"
Aug 11 13:06:41.858: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-f5b1e5b3-b759-469e-837f-2285fa86cdae container secret-volume-test: 
STEP: delete the pod
Aug 11 13:06:41.916: INFO: Waiting for pod pod-projected-secrets-f5b1e5b3-b759-469e-837f-2285fa86cdae to disappear
Aug 11 13:06:42.049: INFO: Pod pod-projected-secrets-f5b1e5b3-b759-469e-837f-2285fa86cdae no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:06:42.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-234" for this suite.

• [SLOW TEST:8.573 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4405,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:06:42.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 11 13:06:46.752: INFO: Successfully updated pod "annotationupdatee2343599-ef70-4cbd-9ddc-93b80c4d1a20"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:06:50.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1515" for this suite.

• [SLOW TEST:8.760 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4427,"failed":0}
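The downwardAPI spec above updates the pod's annotations and waits for the kubelet to re-render the mounted file — that is what "Successfully updated pod" followed by the delay before teardown verifies. A sketch of the projected downwardAPI volume involved (names are illustrative):

```python
# Illustrative projected downwardAPI volume: the kubelet rewrites the
# "annotations" file whenever the pod's annotations change.
volume = {
    "name": "podinfo",
    "projected": {"sources": [{
        "downwardAPI": {"items": [{
            "path": "annotations",
            "fieldRef": {"fieldPath": "metadata.annotations"},
        }]},
    }]},
}

item = volume["projected"]["sources"][0]["downwardAPI"]["items"][0]
assert item["fieldRef"]["fieldPath"] == "metadata.annotations"
```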
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:06:50.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 11 13:06:50.942: INFO: Waiting up to 5m0s for pod "pod-5c70c1ae-9813-44ff-8aa0-88b89323d1d3" in namespace "emptydir-501" to be "Succeeded or Failed"
Aug 11 13:06:50.957: INFO: Pod "pod-5c70c1ae-9813-44ff-8aa0-88b89323d1d3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.694942ms
Aug 11 13:06:52.962: INFO: Pod "pod-5c70c1ae-9813-44ff-8aa0-88b89323d1d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019138624s
Aug 11 13:06:54.965: INFO: Pod "pod-5c70c1ae-9813-44ff-8aa0-88b89323d1d3": Phase="Running", Reason="", readiness=true. Elapsed: 4.022535044s
Aug 11 13:06:57.050: INFO: Pod "pod-5c70c1ae-9813-44ff-8aa0-88b89323d1d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107457491s
STEP: Saw pod success
Aug 11 13:06:57.050: INFO: Pod "pod-5c70c1ae-9813-44ff-8aa0-88b89323d1d3" satisfied condition "Succeeded or Failed"
Aug 11 13:06:57.052: INFO: Trying to get logs from node kali-worker pod pod-5c70c1ae-9813-44ff-8aa0-88b89323d1d3 container test-container: 
STEP: delete the pod
Aug 11 13:06:57.335: INFO: Waiting for pod pod-5c70c1ae-9813-44ff-8aa0-88b89323d1d3 to disappear
Aug 11 13:06:57.654: INFO: Pod pod-5c70c1ae-9813-44ff-8aa0-88b89323d1d3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:06:57.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-501" for this suite.

• [SLOW TEST:6.871 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4483,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:06:57.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-8xq5
STEP: Creating a pod to test atomic-volume-subpath
Aug 11 13:06:59.293: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8xq5" in namespace "subpath-7505" to be "Succeeded or Failed"
Aug 11 13:06:59.775: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Pending", Reason="", readiness=false. Elapsed: 481.271645ms
Aug 11 13:07:02.014: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.721180354s
Aug 11 13:07:04.296: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.002476586s
Aug 11 13:07:06.960: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.666773581s
Aug 11 13:07:08.965: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Running", Reason="", readiness=true. Elapsed: 9.67125775s
Aug 11 13:07:10.972: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Running", Reason="", readiness=true. Elapsed: 11.6786893s
Aug 11 13:07:13.361: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Running", Reason="", readiness=true. Elapsed: 14.067888045s
Aug 11 13:07:15.365: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Running", Reason="", readiness=true. Elapsed: 16.071304028s
Aug 11 13:07:17.368: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Running", Reason="", readiness=true. Elapsed: 18.074544537s
Aug 11 13:07:19.371: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Running", Reason="", readiness=true. Elapsed: 20.077699282s
Aug 11 13:07:21.433: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Running", Reason="", readiness=true. Elapsed: 22.139641413s
Aug 11 13:07:23.631: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Running", Reason="", readiness=true. Elapsed: 24.33754204s
Aug 11 13:07:25.727: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Running", Reason="", readiness=true. Elapsed: 26.433259574s
Aug 11 13:07:27.731: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Running", Reason="", readiness=true. Elapsed: 28.438040524s
Aug 11 13:07:29.734: INFO: Pod "pod-subpath-test-secret-8xq5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.441049269s
STEP: Saw pod success
Aug 11 13:07:29.734: INFO: Pod "pod-subpath-test-secret-8xq5" satisfied condition "Succeeded or Failed"
Aug 11 13:07:29.736: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-8xq5 container test-container-subpath-secret-8xq5: 
STEP: delete the pod
Aug 11 13:07:29.883: INFO: Waiting for pod pod-subpath-test-secret-8xq5 to disappear
Aug 11 13:07:29.923: INFO: Pod pod-subpath-test-secret-8xq5 no longer exists
STEP: Deleting pod pod-subpath-test-secret-8xq5
Aug 11 13:07:29.923: INFO: Deleting pod "pod-subpath-test-secret-8xq5" in namespace "subpath-7505"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:07:29.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7505" for this suite.

• [SLOW TEST:32.243 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":270,"skipped":4489,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:07:29.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 13:07:30.552: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 11 13:07:35.631: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 11 13:07:39.713: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 11 13:07:39.758: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-1043 /apis/apps/v1/namespaces/deployment-1043/deployments/test-cleanup-deployment 82787283-1fc8-4eec-aa2c-aed7064f10e9 8575138 1 2020-08-11 13:07:39 +0000 UTC   map[name:cleanup-pod] map[] [] []  [{e2e.test Update apps/v1 2020-08-11 13:07:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 
115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004436b88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Aug 11 13:07:39.768: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Aug 11 13:07:39.768: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Aug 11 13:07:39.769: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-1043 /apis/apps/v1/namespaces/deployment-1043/replicasets/test-cleanup-controller 1e715774-789c-4d9c-9239-64290ef27ed1 8575140 1 2020-08-11 13:07:30 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 82787283-1fc8-4eec-aa2c-aed7064f10e9 0xc004436f3f 0xc004436f50}] []  [{e2e.test Update apps/v1 2020-08-11 13:07:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 
58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-11 13:07:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 50 55 56 55 50 56 51 45 49 102 99 56 45 52 101 101 99 45 97 97 50 99 45 97 101 100 55 48 54 52 102 49 48 101 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  
[] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004436ff8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 11 13:07:39.818: INFO: Pod "test-cleanup-controller-l7t66" is available:
&Pod{ObjectMeta:{test-cleanup-controller-l7t66 test-cleanup-controller- deployment-1043 /api/v1/namespaces/deployment-1043/pods/test-cleanup-controller-l7t66 e4240bd7-fd64-4cc0-bc22-a70ade6c6807 8575132 0 2020-08-11 13:07:30 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 1e715774-789c-4d9c-9239-64290ef27ed1 0xc003b57a07 0xc003b57a08}] []  [{kube-controller-manager Update v1 2020-08-11 13:07:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 101 55 49 53 55 55 52 45 55 56 57 99 45 52 100 57 99 45 57 50 51 57 45 54 52 50 57 48 101 102 50 55 101 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 
110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-11 13:07:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 
123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 49 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6zhtd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6zhtd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6zhtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccoun
tName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 13:07:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 13:07:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 13:07:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 13:07:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.212,StartTime:2020-08-11 13:07:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 13:07:37 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7d6dbe74345cca793118b8d9ffa9c46fa989bdf6b5a73ee41f509656b5520b8f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.212,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:07:39.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1043" for this suite.

• [SLOW TEST:10.068 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":271,"skipped":4505,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:07:40.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-3dffe984-939c-44ac-a1da-c5eea869bb6f
STEP: Creating configMap with name cm-test-opt-upd-da68268e-e7fe-469c-a907-0c4fbc4f2856
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-3dffe984-939c-44ac-a1da-c5eea869bb6f
STEP: Updating configmap cm-test-opt-upd-da68268e-e7fe-469c-a907-0c4fbc4f2856
STEP: Creating configMap with name cm-test-opt-create-f0bb35ce-e0ca-472f-b998-b06e63b4789f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:07:54.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4660" for this suite.

• [SLOW TEST:14.377 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4508,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:07:54.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-77ba42b4-3143-40b0-928e-a74bbf57898b
STEP: Creating a pod to test consume configMaps
Aug 11 13:07:54.760: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-52e38d4c-189f-41b4-8e15-0f57548cdf9f" in namespace "projected-7598" to be "Succeeded or Failed"
Aug 11 13:07:54.825: INFO: Pod "pod-projected-configmaps-52e38d4c-189f-41b4-8e15-0f57548cdf9f": Phase="Pending", Reason="", readiness=false. Elapsed: 65.04782ms
Aug 11 13:07:56.961: INFO: Pod "pod-projected-configmaps-52e38d4c-189f-41b4-8e15-0f57548cdf9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201037147s
Aug 11 13:07:58.964: INFO: Pod "pod-projected-configmaps-52e38d4c-189f-41b4-8e15-0f57548cdf9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204282202s
Aug 11 13:08:00.990: INFO: Pod "pod-projected-configmaps-52e38d4c-189f-41b4-8e15-0f57548cdf9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.230388243s
STEP: Saw pod success
Aug 11 13:08:00.990: INFO: Pod "pod-projected-configmaps-52e38d4c-189f-41b4-8e15-0f57548cdf9f" satisfied condition "Succeeded or Failed"
Aug 11 13:08:00.993: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-52e38d4c-189f-41b4-8e15-0f57548cdf9f container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 13:08:01.630: INFO: Waiting for pod pod-projected-configmaps-52e38d4c-189f-41b4-8e15-0f57548cdf9f to disappear
Aug 11 13:08:01.714: INFO: Pod pod-projected-configmaps-52e38d4c-189f-41b4-8e15-0f57548cdf9f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:08:01.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7598" for this suite.

• [SLOW TEST:7.545 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4512,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:08:01.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 13:08:02.734: INFO: Create a RollingUpdate DaemonSet
Aug 11 13:08:02.737: INFO: Check that daemon pods launch on every node of the cluster
Aug 11 13:08:03.081: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:03.083: INFO: Number of nodes with available pods: 0
Aug 11 13:08:03.083: INFO: Node kali-worker is running more than one daemon pod
Aug 11 13:08:04.275: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:04.278: INFO: Number of nodes with available pods: 0
Aug 11 13:08:04.278: INFO: Node kali-worker is running more than one daemon pod
Aug 11 13:08:05.087: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:05.090: INFO: Number of nodes with available pods: 0
Aug 11 13:08:05.090: INFO: Node kali-worker is running more than one daemon pod
Aug 11 13:08:06.088: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:06.093: INFO: Number of nodes with available pods: 0
Aug 11 13:08:06.093: INFO: Node kali-worker is running more than one daemon pod
Aug 11 13:08:07.495: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:07.554: INFO: Number of nodes with available pods: 0
Aug 11 13:08:07.554: INFO: Node kali-worker is running more than one daemon pod
Aug 11 13:08:08.195: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:08.213: INFO: Number of nodes with available pods: 0
Aug 11 13:08:08.213: INFO: Node kali-worker is running more than one daemon pod
Aug 11 13:08:09.142: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:09.171: INFO: Number of nodes with available pods: 0
Aug 11 13:08:09.171: INFO: Node kali-worker is running more than one daemon pod
Aug 11 13:08:10.208: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:10.643: INFO: Number of nodes with available pods: 2
Aug 11 13:08:10.643: INFO: Number of running nodes: 2, number of available pods: 2
Aug 11 13:08:10.643: INFO: Update the DaemonSet to trigger a rollout
Aug 11 13:08:10.877: INFO: Updating DaemonSet daemon-set
Aug 11 13:08:24.549: INFO: Roll back the DaemonSet before rollout is complete
Aug 11 13:08:24.563: INFO: Updating DaemonSet daemon-set
Aug 11 13:08:24.563: INFO: Make sure DaemonSet rollback is complete
Aug 11 13:08:24.583: INFO: Wrong image for pod: daemon-set-pm2kq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 11 13:08:24.584: INFO: Pod daemon-set-pm2kq is not available
Aug 11 13:08:24.615: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:25.620: INFO: Wrong image for pod: daemon-set-pm2kq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 11 13:08:25.620: INFO: Pod daemon-set-pm2kq is not available
Aug 11 13:08:25.623: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:26.620: INFO: Wrong image for pod: daemon-set-pm2kq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 11 13:08:26.620: INFO: Pod daemon-set-pm2kq is not available
Aug 11 13:08:26.625: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:27.619: INFO: Wrong image for pod: daemon-set-pm2kq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 11 13:08:27.619: INFO: Pod daemon-set-pm2kq is not available
Aug 11 13:08:27.630: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:28.761: INFO: Wrong image for pod: daemon-set-pm2kq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 11 13:08:28.761: INFO: Pod daemon-set-pm2kq is not available
Aug 11 13:08:28.782: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:29.722: INFO: Wrong image for pod: daemon-set-pm2kq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 11 13:08:29.722: INFO: Pod daemon-set-pm2kq is not available
Aug 11 13:08:29.726: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:31.146: INFO: Pod daemon-set-hm56n is not available
Aug 11 13:08:31.318: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 13:08:31.633: INFO: Pod daemon-set-hm56n is not available
Aug 11 13:08:31.639: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2729, will wait for the garbage collector to delete the pods
Aug 11 13:08:31.703: INFO: Deleting DaemonSet.extensions daemon-set took: 7.025582ms
Aug 11 13:08:32.104: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.280556ms
Aug 11 13:08:45.140: INFO: Number of nodes with available pods: 0
Aug 11 13:08:45.140: INFO: Number of running nodes: 0, number of available pods: 0
Aug 11 13:08:45.143: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2729/daemonsets","resourceVersion":"8575482"},"items":null}

Aug 11 13:08:45.238: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2729/pods","resourceVersion":"8575483"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:08:45.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2729" for this suite.

• [SLOW TEST:43.497 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":274,"skipped":4579,"failed":0}
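The repeated "DaemonSet pods can't tolerate node kali-control-plane" lines above come from a taint/toleration check: the test's DaemonSet carries no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint, so the control-plane node is excluded and only the two workers are counted. A minimal sketch of that check (not the real scheduler or e2e framework code, just the matching rule it applies):

```python
# Sketch of NoSchedule taint matching, assuming the simplified rule that a
# pod may run on a node only if every NoSchedule taint on the node is
# matched by one of the pod's tolerations.

def tolerates(taints, tolerations):
    """Return True if every NoSchedule taint has a matching toleration."""
    for taint in taints:
        if taint["effect"] != "NoSchedule":
            continue
        matched = any(
            tol.get("key") == taint["key"]
            and tol.get("effect", taint["effect"]) == taint["effect"]
            for tol in tolerations
        )
        if not matched:
            return False
    return True

control_plane_taints = [
    {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
]
worker_taints = []

# The test's DaemonSet has no master toleration, so the control-plane node
# is skipped while both workers are eligible.
print(tolerates(control_plane_taints, []))  # False -> skip this node
print(tolerates(worker_taints, []))         # True  -> count this node
```

This is why the log settles on "Number of running nodes: 2" for a three-node cluster.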
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 11 13:08:45.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 11 13:08:47.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config version'
Aug 11 13:08:49.031: INFO: stderr: ""
Aug 11 13:08:49.031: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.5\", GitCommit:\"e6503f8d8f769ace2f338794c914a96fc335df0f\", GitTreeState:\"clean\", BuildDate:\"2020-07-09T18:53:46Z\", GoVersion:\"go1.13.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.4\", GitCommit:\"c96aede7b5205121079932896c4ad89bb93260af\", GitTreeState:\"clean\", BuildDate:\"2020-06-20T01:49:49Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 11 13:08:49.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5122" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":275,"skipped":4689,"failed":0}
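The kubectl test above asserts that `kubectl version` prints build info for both client and server. A small sketch of that property check, parsing a sample string abbreviated from the stdout captured in the log (the helper name and regex are mine, not the e2e framework's):

```python
import re

# Sample abbreviated from the log's captured stdout; "..." stands in for
# the remaining version.Info fields.
stdout = (
    'Client Version: version.Info{Major:"1", Minor:"18", '
    'GitVersion:"v1.18.5", GitCommit:"e6503f8"}\n'
    'Server Version: version.Info{Major:"1", Minor:"18", '
    'GitVersion:"v1.18.4", GitCommit:"c96aede"}\n'
)

def extract_versions(text):
    """Map each side (Client/Server) to the GitVersion it reports."""
    pattern = r'(Client|Server) Version: version\.Info\{[^}]*GitVersion:"([^"]+)"'
    return dict(re.findall(pattern, text))

versions = extract_versions(stdout)
print(versions)  # {'Client': 'v1.18.5', 'Server': 'v1.18.4'}
```

Both keys being present is the "all data is printed" condition the test name refers to; here they also show the v1.18.5 client against the v1.18.4 API server noted at the start of the run.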
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
Aug 11 13:08:49.042: INFO: Running AfterSuite actions on all nodes
Aug 11 13:08:49.042: INFO: Running AfterSuite actions on node 1
Aug 11 13:08:49.042: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 5795.020 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS